What is Pneumonia?
Pneumonia is an infection in one or both lungs, caused by bacteria, viruses, or fungi. The infection inflames the air sacs in the lungs, which are called alveoli.
Pneumonia accounts for over 15% of all deaths of children under 5 years old worldwide; in 2017, 920,000 children under the age of 5 died from the disease. Diagnosis requires review of a chest radiograph (CXR) by highly trained specialists and confirmation through clinical history, vital signs and laboratory exams. Pneumonia usually manifests as an area or areas of increased opacity on CXR. However, the diagnosis of pneumonia on CXR is complicated by a number of other conditions in the lungs, such as fluid overload (pulmonary edema), bleeding, volume loss (atelectasis or collapse), lung cancer, or post-radiation or surgical changes. Outside of the lungs, fluid in the pleural space (pleural effusion) also appears as increased opacity on CXR. When available, comparison of CXRs of the patient taken at different time points and correlation with clinical symptoms and history are helpful in making the diagnosis.
CXRs are the most commonly performed diagnostic imaging study. A number of factors such as positioning of the patient and depth of inspiration can alter the appearance of the CXR, complicating interpretation further. In addition, clinicians are faced with reading high volumes of images every shift.
Pneumonia Detection
To detect pneumonia, we need to detect inflammation of the lungs. In this project, the challenge is to build an algorithm that detects a visual signal for pneumonia in medical images. Specifically, the algorithm needs to automatically locate lung opacities on chest radiographs.
Business Domain Value:
Automate pneumonia screening in chest radiographs, providing details of the affected areas through bounding boxes.
Assist physicians in making better clinical decisions, or even replace human judgement in certain functional areas of healthcare (e.g., radiology). Guided by relevant clinical questions, powerful AI techniques can unlock clinically relevant information hidden in massive amounts of data, which in turn can assist clinical decision making.
Project Description:
In this capstone project, the goal is to build a pneumonia detection system, to locate the position of inflammation in an image.
Tissues with sparse material, such as lungs which are full of air, do not absorb the X-rays and appear black in the image. Dense tissues such as bones absorb X-rays and appear white in the image.
While we are theoretically detecting “lung opacities”, there are lung opacities that are not pneumonia related.
In the data, some of these are labeled "No Lung Opacity / Not Normal". This extra third class indicates that while pneumonia was determined not to be present, there was nonetheless some type of abnormality on the image, and oftentimes this finding may mimic the appearance of true pneumonia.
DICOM original images: - Medical images are stored in a special format called DICOM (*.dcm). These files contain a combination of header metadata as well as the underlying raw image arrays for pixel data.
Details about the data and the dataset files are available at the link below:
https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data
# Loading the necessary libraries
import os
import math
import warnings
import random as rd
from glob import glob

import numpy as np
from numpy import asarray
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import seaborn as sns
import cv2
import pydicom as dcm
from PIL import Image
%matplotlib inline

# Use tensorflow.keras consistently rather than mixing it with standalone keras
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import (Layer, Input, Dense, Dropout, Flatten, Reshape,
                                     Conv2D, Conv2DTranspose, MaxPooling2D, MaxPool2D,
                                     UpSampling2D, GlobalAveragePooling2D, GlobalMaxPooling2D,
                                     Concatenate, Activation, BatchNormalization)
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.metrics import Recall, Precision
from tensorflow.keras.applications import MobileNet, VGG19
from tensorflow.keras.applications.mobilenet import preprocess_input
import tensorflow.keras.utils as pltUtil

from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix

# Initialize the random number generator for reproducibility
rd.seed(0)

# Ignore the warnings
warnings.filterwarnings("ignore")

# from google.colab import drive # import drive from google colab
# drive.mount('/content/drive',force_remount=True) # default location for the drive
# os.chdir("/content/drive/My Drive/CV/CapstoneProject/") # we mount the google drive at /content/drive and change dir to this
os.chdir("/Users/seenu/tensorflow/rsna-pneumonia-detection-challenge/")
# Reading the training labels and showing the first 5 records
data = pd.read_csv("stage_2_train_labels.csv")
data.head()
| patientId | x | y | width | height | Target | |
|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | NaN | NaN | NaN | NaN | 0 |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | NaN | NaN | NaN | NaN | 0 |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | NaN | NaN | NaN | NaN | 0 |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 |
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30227 entries, 0 to 30226
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   patientId  30227 non-null  object 
 1   x          9555 non-null   float64
 2   y          9555 non-null   float64
 3   width      9555 non-null   float64
 4   height     9555 non-null   float64
 5   Target     30227 non-null  int64  
dtypes: float64(4), int64(1), object(1)
memory usage: 1.4+ MB
The training labels file comprises 30227 records, but bounding-box coordinates are given for only 9555 of them; the remaining rows have null coordinate values.
data.shape
(30227, 6)
# Detecting missing values
data.isnull().sum()
# the output below shows that 20672 chest X-rays have null coordinates and therefore no bounding boxes
patientId        0
x            20672
y            20672
width        20672
height       20672
Target           0
dtype: int64
# counting the Target values for chest X-rays without bounding boxes
data[data.isnull().any(axis=1)].Target.value_counts()
0    20672
Name: Target, dtype: int64
# counting the Target values for chest X-rays with bounding boxes
data[~data.isnull().any(axis=1)].Target.value_counts()
1    9555
Name: Target, dtype: int64
data[data.isnull().any(axis=1)]
| patientId | x | y | width | height | Target | |
|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | NaN | NaN | NaN | NaN | 0 |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | NaN | NaN | NaN | NaN | 0 |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | NaN | NaN | NaN | NaN | 0 |
| 6 | 00569f44-917d-4c86-a842-81832af98c30 | NaN | NaN | NaN | NaN | 0 |
| ... | ... | ... | ... | ... | ... | ... |
| 30216 | c1cf3255-d734-4980-bfe0-967902ad7ed9 | NaN | NaN | NaN | NaN | 0 |
| 30217 | c1e228e4-b7b4-432b-a735-36c48fdb806f | NaN | NaN | NaN | NaN | 0 |
| 30218 | c1e3eb82-c55a-471f-a57f-fe1a823469da | NaN | NaN | NaN | NaN | 0 |
| 30223 | c1edf42b-5958-47ff-a1e7-4f23d99583ba | NaN | NaN | NaN | NaN | 0 |
| 30224 | c1f6b555-2eb1-4231-98f6-50a963976431 | NaN | NaN | NaN | NaN | 0 |
20672 rows × 6 columns
It has been found that 20672 records with Target 0 do not have pneumonia (chest X-rays without bounding boxes), while 9555 records with Target 1 have pneumonia (chest X-rays with bounding boxes).
# checking if there are unique values of patient ID
data["patientId"].is_unique
False
# counting duplicated patient IDs; a patient ID repeats when the image has more than one bounding box
data["patientId"].duplicated().sum()
3543
duplicatept= data[data["patientId"].duplicated()]
duplicatept.head(5)
| patientId | x | y | width | height | Target | |
|---|---|---|---|---|---|---|
| 5 | 00436515-870c-4b36-a041-de91049b9ab4 | 562.0 | 152.0 | 256.0 | 453.0 | 1 |
| 9 | 00704310-78a8-4b38-8475-49f4573b2dbb | 695.0 | 575.0 | 162.0 | 137.0 | 1 |
| 15 | 00aecb01-a116-45a2-956c-08d2fa55433f | 547.0 | 299.0 | 119.0 | 165.0 | 1 |
| 17 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | 650.0 | 511.0 | 206.0 | 284.0 | 1 |
| 20 | 00f08de1-517e-4652-a04f-d1dc9ee48593 | 571.0 | 275.0 | 230.0 | 476.0 | 1 |
# checking how many bounding boxes a duplicated patient ID has
data[data.patientId=="00704310-78a8-4b38-8475-49f4573b2dbb"]
| patientId | x | y | width | height | Target | |
|---|---|---|---|---|---|---|
| 8 | 00704310-78a8-4b38-8475-49f4573b2dbb | 323.0 | 577.0 | 160.0 | 104.0 | 1 |
| 9 | 00704310-78a8-4b38-8475-49f4573b2dbb | 695.0 | 575.0 | 162.0 | 137.0 | 1 |
data[data.patientId=="00c0b293-48e7-4e16-ac76-9269ba535a62"]
| patientId | x | y | width | height | Target | |
|---|---|---|---|---|---|---|
| 16 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | 306.0 | 544.0 | 168.0 | 244.0 | 1 |
| 17 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | 650.0 | 511.0 | 206.0 | 284.0 | 1 |
# checking the count of Target column
data.Target.value_counts()
0    20672
1     9555
Name: Target, dtype: int64
# Reading the detailed class info
det_class = pd.read_csv("stage_2_detailed_class_info.csv")
det_class.head()
| patientId | class | |
|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | No Lung Opacity / Not Normal |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | No Lung Opacity / Not Normal |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | No Lung Opacity / Not Normal |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | Normal |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | Lung Opacity |
# checking the shape
det_class.shape
(30227, 2)
# checking the info
det_class.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30227 entries, 0 to 30226
Data columns (total 2 columns):
 #   Column     Non-Null Count  Dtype 
---  ------     --------------  ----- 
 0   patientId  30227 non-null  object
 1   class      30227 non-null  object
dtypes: object(2)
memory usage: 472.4+ KB
# checking the null values
det_class.isnull().sum()
patientId    0
class        0
dtype: int64
# checking the duplicate values
det_class.patientId.duplicated().sum()
3543
# checking the duplicate patients
duplicate_class= det_class[det_class.patientId.duplicated()]
duplicate_class.head(5)
| patientId | class | |
|---|---|---|
| 5 | 00436515-870c-4b36-a041-de91049b9ab4 | Lung Opacity |
| 9 | 00704310-78a8-4b38-8475-49f4573b2dbb | Lung Opacity |
| 15 | 00aecb01-a116-45a2-956c-08d2fa55433f | Lung Opacity |
| 17 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | Lung Opacity |
| 20 | 00f08de1-517e-4652-a04f-d1dc9ee48593 | Lung Opacity |
# checking the duplicate patients with their respective classes
det_class[det_class.patientId=="00704310-78a8-4b38-8475-49f4573b2dbb"]
| patientId | class | |
|---|---|---|
| 8 | 00704310-78a8-4b38-8475-49f4573b2dbb | Lung Opacity |
| 9 | 00704310-78a8-4b38-8475-49f4573b2dbb | Lung Opacity |
det_class[det_class.patientId=="00c0b293-48e7-4e16-ac76-9269ba535a62"]
| patientId | class | |
|---|---|---|
| 16 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | Lung Opacity |
| 17 | 00c0b293-48e7-4e16-ac76-9269ba535a62 | Lung Opacity |
# checking the count of class column
det_class["class"].value_counts()
No Lung Opacity / Not Normal    11821
Lung Opacity                     9555
Normal                           8851
Name: class, dtype: int64
print("Let's check the distribution of the `Target` and `class` columns"); print('--'*40)
fig = plt.figure(figsize = (10, 6))
ax = fig.add_subplot(121)
g = (data['Target'].value_counts()
.plot(kind = 'pie', autopct = '%.0f%%',
labels = ['Negative', 'Pneumonia Evidence'],
colors = ['violet', 'gainsboro'],
startangle = 90,
title = 'Distribution of Target', fontsize = 12)
.set_ylabel(''))
ax = fig.add_subplot(122)
g = (det_class['class'].value_counts().sort_index(ascending = False)
.plot(kind = 'pie', autopct = '%.0f%%',
colors = ['lightblue', 'lightgreen', 'brown'],
startangle = 90, title = 'Distribution of Class',
fontsize = 12)
.set_ylabel(''))
plt.tight_layout()
Let's check the distribution of the `Target` and `class` columns
--------------------------------------------------------------------------------
print('Let\'s group by patient IDs and check number of bounding boxes for each unique patient ID');print('--'*44)
b_boxes = data[~data.isnull().any(axis=1)].groupby('patientId').size().to_frame('number_of_boxes').reset_index()
Let's group by patient IDs and check number of bounding boxes for each unique patient ID
----------------------------------------------------------------------------------------
b_boxes
| patientId | number_of_boxes | |
|---|---|---|
| 0 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 2 |
| 1 | 000fe35a-2649-43d4-b027-e67796d412e0 | 2 |
| 2 | 001031d9-f904-4a23-b3e5-2c088acd19c6 | 2 |
| 3 | 001916b8-3d30-4935-a5d1-8eaddb1646cd | 1 |
| 4 | 0022073f-cec8-42ec-ab5f-bc2314649235 | 2 |
| ... | ... | ... |
| 6007 | ffa424d2-6e6b-4eed-93ab-7551e8941215 | 2 |
| 6008 | ffae40ab-fcfe-4311-a74a-89f605dba48b | 1 |
| 6009 | ffd787b6-59ca-48cb-bd15-bcedd52cf37c | 2 |
| 6010 | fff0b503-72a5-446a-843d-f3d152e39053 | 1 |
| 6011 | fffb2395-8edd-4954-8a89-ffe2fd329be3 | 2 |
6012 rows × 2 columns
print('Number of unique patient IDs in the dataset: {}'.format(len(b_boxes)))
print('\nNumber of patient IDs per b_boxes in the train dataset')
(b_boxes.groupby('number_of_boxes')
.size()
.to_frame('number_of_patientIDs_per_boxes')
.reset_index()
.set_index('number_of_boxes')
.sort_values(by = 'number_of_boxes'))
Number of unique patient IDs in the dataset: 6012

Number of patient IDs per b_boxes in the train dataset
| number_of_patientIDs_per_boxes | |
|---|---|
| number_of_boxes | |
| 1 | 2614 |
| 2 | 3266 |
| 3 | 119 |
| 4 | 13 |
#sorting both the datasets based on patientId
data.sort_values('patientId', inplace = True)
det_class.sort_values('patientId', inplace = True)
#concatenating the data labels and class labels file for model training
train_data = pd.concat([data, det_class['class']], axis = 1, sort = False)
print('The merged dataset has {} rows and {} columns and looks like:'.format(train_data.shape[0], train_data.shape[1]))
train_data.head()
The merged dataset has 30227 rows and 7 columns and looks like:
| patientId | x | y | width | height | Target | class | |
|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | NaN | NaN | NaN | NaN | 0 | Normal |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity |
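One subtlety worth noting: `pd.concat(..., axis=1)` aligns rows by index, not by position, so the `sort_values` calls above only change the display order. The `class` column is paired with the correct row because both CSV files share the same original row index. A tiny sketch with made-up frames illustrates this:

```python
import pandas as pd

# Two toy frames that share an index; 'a' is shuffled before concatenating
a = pd.DataFrame({"patientId": ["p1", "p2"], "Target": [0, 1]}, index=[0, 1])
b = pd.DataFrame({"class": ["Normal", "Lung Opacity"]}, index=[0, 1])

# concat(axis=1) pairs rows by INDEX, so shuffling 'a' does not misalign them
merged = pd.concat([a.sort_values("patientId", ascending=False), b["class"]], axis=1)
print(merged.loc[0, "class"])   # row with index 0 still gets 'Normal'
```

This is why sorting both dataframes before the concat is harmless here: alignment is driven entirely by the index.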
# analysing the stage 2 train images (DICOM format)
def inspectImages(data):
    img_data = list(data.T.to_dict().values())
    f, ax = plt.subplots(3, 3, figsize=(16, 18))
    for i, data_row in enumerate(img_data):
        patientImage = data_row['patientId']
        dcm_file = 'stage_2_train_images/' + '{}.dcm'.format(patientImage)
        data_row_img = dcm.dcmread(dcm_file)  # read the DICOM file once
        modality = data_row_img.Modality
        age = data_row_img.PatientAge
        sex = data_row_img.PatientSex
        ax[i//3, i%3].imshow(data_row_img.pixel_array, cmap=plt.cm.bone)
        ax[i//3, i%3].axis('off')
        ax[i//3, i%3].set_title('ID: {}\nModality: {} Age: {} Sex: {} Target: {}\nClass: {}\nBounds: {}:{}:{}:{}'.format(
            data_row['patientId'],
            modality, age, sex, data_row['Target'], data_row['class'],
            data_row['x'], data_row['y'], data_row['width'], data_row['height']))
        if not math.isnan(data_row['x']):
            x, y, width, height = data_row['x'], data_row['y'], data_row['width'], data_row['height']
            rect = patches.Rectangle((x, y), width, height,
                                     linewidth=2,
                                     edgecolor='r',
                                     facecolor='none')
            # Draw the bounding box on top of the image
            ax[i//3, i%3].add_patch(rect)
    plt.show()
## checking a few images that have pneumonia
inspectImages(train_data[train_data['Target']==1].sample(9))
## checking a few images that do not have pneumonia
inspectImages(train_data[train_data['Target']==0].sample(9))
%%time
## Each DICOM image also carries metadata;
## function to read a DICOM file and return the patient's sex and age
def readDCIMData(rowData):
    dcm_file = 'stage_2_train_images/' + '{}.dcm'.format(rowData.patientId)
    dcm_data = dcm.dcmread(dcm_file)
    return dcm_data.PatientSex, dcm_data.PatientAge
CPU times: user 3 µs, sys: 1 µs, total: 4 µs Wall time: 7.15 µs
%%time
## Reading the DICOM metadata and appending it to the train_data dataset
train_data['sex'], train_data['age'] = zip(*train_data.apply(readDCIMData, axis=1))
CPU times: user 1min 34s, sys: 7.11 s, total: 1min 41s Wall time: 1min 51s
train_data.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 30227 entries, 0 to 28988
Data columns (total 9 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   patientId  30227 non-null  object 
 1   x          9555 non-null   float64
 2   y          9555 non-null   float64
 3   width      9555 non-null   float64
 4   height     9555 non-null   float64
 5   Target     30227 non-null  int64  
 6   class      30227 non-null  object 
 7   sex        30227 non-null  object 
 8   age        30227 non-null  object 
dtypes: float64(4), int64(1), object(4)
memory usage: 2.3+ MB
# Converting age to Numeric as the current data type is a String
train_data['age'] = train_data.age.astype(int)
train_data.describe(include="all")
| patientId | x | y | width | height | Target | class | sex | age | |
|---|---|---|---|---|---|---|---|---|---|
| count | 30227 | 9555.000000 | 9555.000000 | 9555.000000 | 9555.000000 | 30227.000000 | 30227 | 30227 | 30227.000000 |
| unique | 26684 | NaN | NaN | NaN | NaN | NaN | 3 | 2 | NaN |
| top | 76f71a93-8105-4c79-a010-0cfa86f0061a | NaN | NaN | NaN | NaN | NaN | No Lung Opacity / Not Normal | M | NaN |
| freq | 4 | NaN | NaN | NaN | NaN | NaN | 11821 | 17216 | NaN |
| mean | NaN | 394.047724 | 366.839560 | 218.471376 | 329.269702 | 0.316108 | NaN | NaN | 46.797764 |
| std | NaN | 204.574172 | 148.940488 | 59.289475 | 157.750755 | 0.464963 | NaN | NaN | 16.892940 |
| min | NaN | 2.000000 | 2.000000 | 40.000000 | 45.000000 | 0.000000 | NaN | NaN | 1.000000 |
| 25% | NaN | 207.000000 | 249.000000 | 177.000000 | 203.000000 | 0.000000 | NaN | NaN | 34.000000 |
| 50% | NaN | 324.000000 | 365.000000 | 217.000000 | 298.000000 | 0.000000 | NaN | NaN | 49.000000 |
| 75% | NaN | 594.000000 | 478.500000 | 259.000000 | 438.000000 | 1.000000 | NaN | NaN | 59.000000 |
| max | NaN | 835.000000 | 881.000000 | 528.000000 | 942.000000 | 1.000000 | NaN | NaN | 155.000000 |
## Distribution of Target
label_count=train_data['Target'].value_counts()
explode = (0.01,0.01)
fig1, ax1 = plt.subplots(figsize=(4,4))
ax1.pie(label_count.values, explode=explode, labels=label_count.index, autopct='%1.1f%%', startangle=90)
ax1.axis('equal')
plt.title( "Distribution of Target")
plt.show()
## Distribution of Classes
class_count=train_data['class'].value_counts()
explode = (0.01,0.01,0.01)
fig1, ax1 = plt.subplots(figsize=(4,4))
ax1.pie(class_count.values, explode=explode, labels=class_count.index, autopct='%1.1f%%',
startangle=90)
ax1.axis('equal')
plt.title('Distribution of Class')
plt.show()
# insights:
# 39.1% of records are 'No Lung Opacity / Not Normal',
# 29.3% are 'Normal', and
# the remaining 31.6% are 'Lung Opacity'
# distribution of target and class
fig, ax = plt.subplots(nrows = 1, figsize = (8, 6))
temp = train_data.groupby('Target')['class'].value_counts()
data_target_class = pd.DataFrame(data = {'Values': temp.values}, index = temp.index).reset_index()
sns.barplot(ax = ax, x = 'Target', y = 'Values', hue = 'class', data = data_target_class, palette = 'tab10')
plt.title('Class and Target Distribution')
Text(0.5, 1.0, 'Class and Target Distribution')
# count of gender
train_data.sex.value_counts()
M    17216
F    13011
Name: sex, dtype: int64
# Distribution of Sex among the targets
fig, ax = plt.subplots(nrows = 1, figsize = (8, 5))
temp = train_data.groupby('Target')['sex'].value_counts()
data_target_class = pd.DataFrame(data = {'Values': temp.values}, index = temp.index).reset_index()
sns.barplot(ax = ax, x = 'Target', y = 'Values', hue = 'sex', data = data_target_class, palette = 'pastel')
plt.title('Sex and Target for Chest Exams')
Text(0.5, 1.0, 'Sex and Target for Chest Exams')
# Distribution of Sex among the classes
fig, ax = plt.subplots(nrows = 1, figsize = (8, 5))
temp = train_data.groupby('class')['sex'].value_counts()
data_target_class = pd.DataFrame(data = {'Values': temp.values}, index = temp.index).reset_index()
sns.barplot(ax = ax, x = 'class', y = 'Values', hue = 'sex', data = data_target_class, palette = 'Set3')
plt.title('Sex and class for Chest Exams')
Text(0.5, 1.0, 'Sex and class for Chest Exams')
# plot of age distribution
# dist plot
sns.distplot(train_data.age)
<AxesSubplot:xlabel='age', ylabel='Density'>
# Distribution of PatientAge who have pneumonia
sns.distplot(train_data.loc[train_data['Target'] == 1, 'age'])
<AxesSubplot:xlabel='age', ylabel='Density'>
# Subplots
fig = plt.figure(figsize = (12, 4))
ax = fig.add_subplot(121)
g = (sns.distplot(train_data['age']).set_title('Distribution of PatientAge'))
ax = fig.add_subplot(122)
g = (sns.distplot(train_data.loc[train_data['Target'] == 1, 'age']).set_title('Distribution of PatientAge who have pneumonia'))
# distribution of Age among target
# Bar plot
sns.barplot(x='Target', y='age', data=train_data)
<AxesSubplot:xlabel='Target', ylabel='age'>
# Distribution of Age among class
# Bar plot
sns.barplot(x='class', y='age', data=train_data)
<AxesSubplot:xlabel='class', ylabel='age'>
# Box plot
plt.figure(figsize= (10,5))
sns.boxplot (x= "class", y= "age", data= train_data)
<AxesSubplot:xlabel='class', ylabel='age'>
# Insight from the box plot:
# The classes without pneumonia include a few outliers with ages around 150 years, which are likely data-entry errors
# correlation matrix
corr_mat = train_data.corr()
plt.figure(figsize=(12,5))
sns.heatmap(corr_mat,annot=True)
<AxesSubplot:>
# Prepare data for modelling:
# We will convert the dataset to just two classes to simplify the problem.
# Going forward, Target 0 corresponds to the 'Normal' class, whereas Target 1 corresponds to 'Lung Opacity'.
train_data_df = train_data.copy()
#convert the dataset into two classes only:
train_data_df['class'].replace('No Lung Opacity / Not Normal', 'Normal', inplace = True)
print('The merged dataset now looks like:')
train_data_df.head()
The merged dataset now looks like:
| patientId | x | y | width | height | Target | class | sex | age | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | NaN | NaN | NaN | NaN | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
# setting all NaN values to 0 in the train_data_df dataset
# x, y, width and height values of zero (0) mean no bounding box
train_data_df.fillna(0, inplace = True)
print('The training data now looks like: \n')
train_data_df.head()
The training data now looks like:
| patientId | x | y | width | height | Target | class | sex | age | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
train_data_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 30227 entries, 0 to 28988
Data columns (total 9 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   patientId  30227 non-null  object 
 1   x          30227 non-null  float64
 2   y          30227 non-null  float64
 3   width      30227 non-null  float64
 4   height     30227 non-null  float64
 5   Target     30227 non-null  int64  
 6   class      30227 non-null  object 
 7   sex        30227 non-null  object 
 8   age        30227 non-null  int64  
dtypes: float64(4), int64(2), object(3)
memory usage: 2.3+ MB
train_data_df['class'].value_counts()
Normal 20672 Lung Opacity 9555 Name: class, dtype: int64
train_sample = train_data_df.copy()
train_data_df.head()
| patientId | x | y | width | height | Target | class | sex | age | |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
## Let's visualize a handful of positive and negative examples.
## The function below plots five images.
def plot_five_images(images):
    """
    As the name implies, plots five images in a single row.
    Args:
        images: list or np.array containing images
    Returns: None
    """
    # Establish an image index
    a = 1
    # Instantiate the plot
    fig = plt.figure(figsize=(15, 15))
    # Plot the images
    for image in images:
        plt.subplot(1, 5, a)
        plt.imshow(image)
        plt.axis('off')
        a += 1
    plt.show()
# model loss
def display_loss_accuracy(x):
    '''Takes in the model's history and returns a plot of the history of the log loss and the accuracy'''
    plt.figure(figsize=(12, 4))
    plt.subplot(1, 2, 1)
    plt.plot(x.history['loss'])
    plt.plot(x.history['val_loss'])
    plt.title('Model Loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Training Loss', 'Validation Loss'], loc='upper left')
    plt.grid(True)
    plt.subplot(1, 2, 2)
    plt.plot(x.history['accuracy'])
    plt.plot(x.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['Training Accuracy', 'Validation Accuracy'], loc='upper left')
    plt.grid(True)
    plt.tight_layout()
    return plt.show()
from sklearn.metrics import f1_score, roc_curve, roc_auc_score, confusion_matrix, classification_report
from sklearn.metrics import ConfusionMatrixDisplay
def evaluate_model(model, test_x, test_y, threshold=0.5):
    """
    Applies a series of evaluation tools to generate
    a report of the model's performance.
    Args:
        model: tensorflow.python.keras.engine.training.Model
            instance trained on data
        test_x: np.array representing the test set
        test_y: np.array of test labels
        threshold: float giving the binary prediction threshold
    Returns:
        None, but displays evaluation output
    """
    # Predict from the test set (use the arguments, not globals)
    y_probs = model.predict(x=test_x)
    y_probs = y_probs.reshape((len(y_probs),))
    # Convert probabilities to binary predictions
    y_hat = y_probs > threshold
    # Plot the confusion matrix
    print('----------------------Confusion Matrix---------------------\n')
    cm = ConfusionMatrixDisplay.from_predictions(test_y, y_hat, cmap='viridis')
    plt.show()
    # Print the classification report
    print('\n\n-----------------Classification Report-----------------\n')
    print(classification_report(test_y, y_hat))
    # Display the ROC curve
    print('\n\n-----------------------ROC Curve-----------------------\n')
    fpr, tpr, thresholds = roc_curve(test_y, y_probs)
    roc_auc = roc_auc_score(test_y, y_probs)
    plt.figure()
    lw = 2
    plt.plot(fpr, tpr, color='darkorange',
             lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic')
    plt.legend(loc="lower right")
    plt.show()
images = []
ADJUSTED_IMAGE_SIZE = 128
imageList = []
classLabels = []
labels = []
originalImage = []
# Function to reshape an image array to the target size
def readAndReshapeImage(image):
    img = np.array(image).astype(np.uint8)
    ## Resize the image
    res = cv2.resize(img, (ADJUSTED_IMAGE_SIZE, ADJUSTED_IMAGE_SIZE), interpolation=cv2.INTER_LINEAR)
    return res

## Read each DICOM image, resize it and collect the class labels
def populateImage(rowData):
    for index, row in rowData.iterrows():
        patientId = row.patientId
        classlabel = row["class"]
        dcm_file = 'stage_2_train_images/' + '{}.dcm'.format(patientId)
        dcm_data = dcm.dcmread(dcm_file)
        img = dcm_data.pixel_array
        ## Converting the image to 3 channels, as the DICOM pixel data is single-channel grayscale
        if len(img.shape) != 3 or img.shape[2] != 3:
            img = np.stack((img,) * 3, -1)
        imageList.append(readAndReshapeImage(img))
        # originalImage.append(img)
        classLabels.append(classlabel)
    tmpImages = np.array(imageList)
    tmpLabels = np.array(classLabels)
    # originalImages = np.array(originalImage)
    return tmpImages, tmpLabels
%%time
# Reading the images into numpy array
images,labels = populateImage(train_data_df)
CPU times: user 2min 9s, sys: 2.59 s, total: 2min 12s Wall time: 2min 12s
images.shape , labels.shape
((30227, 128, 128, 3), (30227,))
## Checking one of the converted images
plt.imshow(images[1200])
<matplotlib.image.AxesImage at 0x17e1b0ee0>
## check the unique labels
np.unique(labels),len(np.unique(labels))
(array(['Lung Opacity', 'Normal'], dtype='<U12'), 2)
## encoding the labels
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
label = lb.fit_transform(labels)
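Note that `LabelBinarizer` behaves differently for two vs. three classes, which is why the 2-class model below ends in a single sigmoid unit while the later 3-class model needs a 3-unit softmax. A small illustrative sketch:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# With exactly two classes, fit_transform returns a single 0/1 column...
two_class = LabelBinarizer().fit_transform(
    np.array(['Lung Opacity', 'Normal', 'Normal', 'Lung Opacity']))
print(two_class.shape)   # pairs with Dense(1, activation='sigmoid')

# ...but with three classes it returns one-hot rows
three_class = LabelBinarizer().fit_transform(
    np.array(['Lung Opacity', 'No Lung Opacity / Not Normal', 'Normal']))
print(three_class.shape)  # pairs with Dense(3, activation='softmax')
```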
## Splitting into train, test, and validation data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(images, label, test_size=0.2,shuffle = True, random_state=22, stratify=label)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25,shuffle = True, random_state=22, stratify=y_train)
X_train = X_train/255
X_test = X_test/255
X_val = X_val/255
values, counts = np.unique(y_train, return_counts=True)
values1, counts1 = np.unique(y_test, return_counts=True)
values2, counts2 = np.unique(y_val, return_counts=True)
print("Train stratify label:",values)
print("Train stratify label counts:",counts)
print("Test stratify label:",values1)
print("Test stratify label counts:",counts1)
print("Validation stratify label:",values2)
print("Validation stratify label counts:",counts2)
Train stratify label: [0 1]
Train stratify label counts: [ 5733 12402]
Test stratify label: [0 1]
Test stratify label counts: [1911 4135]
Validation stratify label: [0 1]
Validation stratify label counts: [1911 4135]
percent_positive1 = y_train.sum() / len(y_train)
print('Percent of train images that are positive - %.3f' %percent_positive1)
percent_positive2 = y_test.sum() / len(y_test)
print('Percent of test images that are positive - %.3f' %percent_positive2)
percent_positive3 = y_val.sum() / len(y_val)
print('Percent of validation images that are positive - %.3f' %percent_positive3)
Percent of train images that are positive - 0.684
Percent of test images that are positive - 0.684
Percent of validation images that are positive - 0.684
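The identical 0.684 positive rate across all three splits is exactly what `stratify` guarantees. A quick sketch on synthetic labels (illustrative only, not this dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic binary labels with roughly the same positive rate as this dataset
rng = np.random.default_rng(22)
y = (rng.random(10_000) < 0.684).astype(int)
X = np.arange(10_000).reshape(-1, 1)

# stratify=y preserves the class ratio in both resulting splits
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=22, stratify=y)
print(round(y_tr.mean(), 3), round(y_te.mean(), 3))  # near-identical fractions
```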
## Function to create a dataframe for results
def createResult(name,accuracy,testscore):
result = pd.DataFrame({'Method':[name], 'Accuracy': [accuracy] ,'Test Score':[testscore]})
return result
%%time
# Basic model
batch_size = 100
epochs = 30
random_state=22
model1= Sequential()
model1.add(Conv2D(32,(3,3),input_shape=(128,128,3),activation='relu'))
model1.add(MaxPooling2D(2,2))
model1.add(Conv2D(64,(3,3),activation='relu'))
model1.add(MaxPooling2D(2,2))
model1.add(Dropout(0.5))
model1.add(Conv2D(128,(3,3),activation='relu'))
model1.add(MaxPooling2D(2,2))
model1.add(Dropout(0.5))
model1.add(Conv2D(256,(3,3),activation='relu'))
model1.add(MaxPooling2D(2,2))
model1.add(Flatten())
model1.add(Dropout(0.5))
model1.add(Dense(256,activation='relu'))
model1.add(Dense(1,activation='sigmoid'))
# compiling the model
model1.compile(optimizer=Adam(learning_rate=0.001),loss='binary_crossentropy',metrics=['accuracy'])
# Fitting the model
history1 = model1.fit(X_train,
y_train,
epochs = epochs,
validation_data =(X_val,y_val),
batch_size = batch_size,verbose = 1)
Metal device set to: Apple M1 Pro
2022-08-26 12:02:34.056820: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support. 2022-08-26 12:02:34.057011: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Epoch 1/30
2022-08-26 12:02:53.406589: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz 2022-08-26 12:02:53.964874: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - ETA: 0s - loss: 0.5875 - accuracy: 0.6953
182/182 [==============================] - 24s 125ms/step - loss: 0.5875 - accuracy: 0.6953 - val_loss: 0.5039 - val_accuracy: 0.7620
... (epochs 2-29 similar, val_accuracy climbing steadily to ~0.81) ...
Epoch 30/30
182/182 [==============================] - 18s 99ms/step - loss: 0.3654 - accuracy: 0.8345 - val_loss: 0.4161 - val_accuracy: 0.8114
CPU times: user 1min 19s, sys: 6min 47s, total: 8min 6s
Wall time: 9min 27s
# Visualising training and validation accuracy vs. loss
train1_acc = history1.history['accuracy']
display_loss_accuracy(history1)
# Insights of the basic model with 2 classes:
# Training loss dropped steadily, while validation loss showed a much smaller decline,
# hinting at mild overfitting.
# Validation accuracy reached roughly 81%,
# while training accuracy was ~83% and test accuracy ~81%.
# Evaluating the accuracy
model1_test_loss, model1_test_acc = model1.evaluate(X_test, y_test, verbose=1)
print('Test loss:', model1_test_loss)
print('Test accuracy:',model1_test_acc)
189/189 [==============================] - 5s 24ms/step - loss: 0.4157 - accuracy: 0.8134
Test loss: 0.41569533944129944
Test accuracy: 0.8134303689002991
resultDF1 = createResult("CNN model with 2 classes"
,round(train1_acc[-1]*100,2),round(model1_test_acc*100,2))
resultDF1
|   | Method | Accuracy | Test Score |
|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 |
evaluate_model(model1, X_test, y_test)
189/189 [==============================] - 4s 20ms/step
----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.73 0.66 0.69 1911
1 0.85 0.89 0.87 4135
accuracy 0.81 6046
macro avg 0.79 0.77 0.78 6046
weighted avg 0.81 0.81 0.81 6046
-----------------------ROC Curve-----------------------
# Predict from the test set
threshold = 0.5
# Predict from the test set
y_prob1 = model1.predict(x=X_test)
y_prob1 = y_prob1.reshape((len(y_prob1),))
# Convert to binary probabilities
y_hat1 = y_prob1 > threshold
y_hat1 = y_hat1.reshape(len(y_hat1),)
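The fixed threshold of 0.5 is a choice, not a given: lowering it trades precision for recall, which matters in screening where missed pneumonia (false negatives) is costly. A small sketch on hypothetical probabilities (not this model's output):

```python
import numpy as np

# Hypothetical true labels and predicted probabilities
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 1])
y_prob = np.array([0.2, 0.6, 0.7, 0.4, 0.9, 0.1, 0.55, 0.8])

metrics = {}
for t in (0.3, 0.5, 0.7):
    y_hat = y_prob > t
    tp = int(np.sum(y_hat & (y_true == 1)))
    fp = int(np.sum(y_hat & (y_true == 0)))
    fn = int(np.sum(~y_hat & (y_true == 1)))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    metrics[t] = (precision, recall)
    print(f"threshold={t}: precision={precision:.2f}, recall={recall:.2f}")

# Lower thresholds flag more positives: recall rises while precision tends to fall
```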
reportData = classification_report(y_test, y_hat1,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF1[data+"_"+subData] = reportData[data][subData]
resultDF1['1_precision'] =round(resultDF1['1_precision']*100,2)
resultDF1['1_recall'] =round(resultDF1['1_recall']*100,2)
resultDF1['1_f1-score'] =round(resultDF1['1_f1-score']*100,2)
resultDF1
189/189 [==============================] - 4s 20ms/step
|   | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
train_data.head()
|   | patientId | x | y | width | height | Target | class | sex | age |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | NaN | NaN | NaN | NaN | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
# making a copy of the train data
sample_traindata = train_data.copy()
## Checking the class distribution in the training data set
sample_traindata["class"].value_counts()
No Lung Opacity / Not Normal    11821
Lung Opacity                     9555
Normal                           8851
Name: class, dtype: int64
sample_traindata.head()
|   | patientId | x | y | width | height | Target | class | sex | age |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | NaN | NaN | NaN | NaN | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
sample_traindata.fillna(0, inplace = True)
print('The training data now looks like: \n')
sample_traindata.head()
The training data now looks like:
|   | patientId | x | y | width | height | Target | class | sex | age |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | No Lung Opacity / Not Normal | F | 51 |
| 28989 | 000924cf-0f8d-42bd-9158-1af53881a557 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | Normal | F | 19 |
| 28990 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 316.0 | 318.0 | 170.0 | 478.0 | 1 | Lung Opacity | F | 25 |
| 28991 | 000db696-cf54-4385-b10b-6b16fbb3f985 | 660.0 | 375.0 | 146.0 | 402.0 | 1 | Lung Opacity | F | 25 |
| 28992 | 000fe35a-2649-43d4-b027-e67796d412e0 | 570.0 | 282.0 | 269.0 | 409.0 | 1 | Lung Opacity | M | 40 |
sampleimages = []
ADJUSTED_IMAGE_SIZE = 128
imageList = []
classLabels = []
samplelabels = []
originalImage = []
# Function to read the image from the path and reshape the image to size
def readAndReshapeImage(image):
img = np.array(image).astype(np.uint8)
## Resize the image
res = cv2.resize(img,(ADJUSTED_IMAGE_SIZE,ADJUSTED_IMAGE_SIZE), interpolation = cv2.INTER_LINEAR)
return res
## Read the image and resize it
def populateImage(rowData):
for index, row in rowData.iterrows():
patientId = row.patientId
classlabel = row["class"]
dcm_file = 'stage_2_train_images/'+'{}.dcm'.format(patientId)
dcm_data = dcm.dcmread(dcm_file)
img = dcm_data.pixel_array
## Convert the image to 3 channels, since the DICOM pixel array is single-channel grayscale
if len(img.shape) != 3 or img.shape[2] != 3:
img = np.stack((img,) * 3, -1)
imageList.append(readAndReshapeImage(img))
# originalImage.append(img)
classLabels.append(classlabel)
tmpImages = np.array(imageList)
tmpLabels = np.array(classLabels)
# originalImages = np.array(originalImage)
return tmpImages,tmpLabels
%%time
sampleimages,samplelabels = populateImage(sample_traindata)
CPU times: user 2min 12s, sys: 7.69 s, total: 2min 20s
Wall time: 2min 30s
sampleimages.shape , samplelabels.shape
((30227, 128, 128, 3), (30227,))
## Checking one of the converted images
plt.imshow(sampleimages[1200])
<matplotlib.image.AxesImage at 0x29f532e50>
## check the unique labels
np.unique(samplelabels),len(np.unique(samplelabels))
(array(['Lung Opacity', 'No Lung Opacity / Not Normal', 'Normal'],
dtype='<U28'),
3)
## encoding the labels
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
ysample = lb.fit_transform(samplelabels)
# Train test split for 3 class dataset
X_train_c, X_test_c, y_train_c, y_test_c = train_test_split(sampleimages, ysample, test_size=0.2, shuffle = True,random_state=22, stratify=ysample)
X_train_c, X_val_c, y_train_c, y_val_c = train_test_split(X_train_c, y_train_c, test_size=0.25, shuffle = True,random_state=22 ,stratify=y_train_c)
X_train_c = X_train_c/255
X_test_c = X_test_c/255
X_val_c = X_val_c/255
%%time
# Basic CNN model: used 32 filters with kernel size (3, 3) followed by 64, 128 and 256 filters with same kernel size
# with maximum pooling and drop outs
# finally used softmax activation layer
random_state=22
model2= Sequential()
model2.add(Conv2D(32,(3,3),input_shape=(128,128,3),activation='relu'))
model2.add(MaxPooling2D(2,2))
model2.add(Conv2D(64,(3,3),activation='relu'))
model2.add(MaxPooling2D(2,2))
model2.add(Dropout(0.5))
model2.add(Conv2D(128,(3,3),activation='relu'))
model2.add(MaxPooling2D(2,2))
model2.add(Dropout(0.5))
model2.add(Conv2D(256,(3,3),activation='relu'))
model2.add(MaxPooling2D(2,2))
model2.add(Flatten())
model2.add(Dropout(0.5))
model2.add(Dense(256,activation='relu'))
model2.add(Dense(3,activation='softmax'))
# compiling the model
model2.compile(optimizer=Adam(learning_rate=0.001),loss='categorical_crossentropy',metrics=['accuracy'])
# Fitting the model
history2 = model2.fit(X_train_c,
y_train_c,
epochs = epochs,
validation_data =(X_val_c,y_val_c),
batch_size = batch_size,verbose = 1)
Epoch 1/30
182/182 [==============================] - ETA: 0s - loss: 1.0071 - accuracy: 0.4710
182/182 [==============================] - 33s 177ms/step - loss: 1.0071 - accuracy: 0.4710 - val_loss: 0.9320 - val_accuracy: 0.5471
... (epochs 2-29 similar, val_accuracy climbing steadily to ~0.65) ...
Epoch 30/30
182/182 [==============================] - 27s 146ms/step - loss: 0.6482 - accuracy: 0.7006 - val_loss: 0.7376 - val_accuracy: 0.6489
CPU times: user 1min 35s, sys: 12min 45s, total: 14min 21s
Wall time: 16min 45s
## Plotting the accuracy vs loss graph
train2_acc = history2.history['accuracy']
display_loss_accuracy(history2)
## Evaluating the accuracy
model2_test_loss, model2_test_acc = model2.evaluate(X_test_c, y_test_c)
print('Test loss:', model2_test_loss)
print('Test accuracy:', model2_test_acc)
189/189 [==============================] - 5s 26ms/step - loss: 0.7521 - accuracy: 0.6480
Test loss: 0.7521150708198547
Test accuracy: 0.6480317711830139
resultDF2 = createResult("CNN model with 3 classes"
,round(train2_acc[-1]*100,2),round(model2_test_acc*100,2))
resultDF2
|   | Method | Accuracy | Test Score |
|---|---|---|---|
| 0 | CNN model with 3 classes | 70.06 | 64.8 |
# confusion matrix
from sklearn.metrics import confusion_matrix
import itertools
# Class labels 0, 1 and 2 (alphabetical order from LabelBinarizer):
# Class 0 is Lung Opacity
# Class 1 is No Lung Opacity / Not Normal
# Class 2 is Normal
plt.subplots(figsize=(22,7)) #setting the size of the plot
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predicting the values from the validation dataset
y_pred_3c = model2.predict(X_test_c)
# Converting predictions classes to one hot vectors
y_pred_classes = np.argmax(y_pred_3c,axis = 1)
# Convert validation observations to one hot vectors
y_true_c = np.argmax(y_test_c,axis = 1)
# computing the confusion matrix
confusion_mtx = confusion_matrix(y_true_c, y_pred_classes)
# plotting the confusion matrix
plot_confusion_matrix(confusion_mtx, classes = range(3))
# Print the classification report
print('\n\n-----------------Classification Report-----------------\n')
print(classification_report(y_true_c, y_pred_classes))
189/189 [==============================] - 4s 21ms/step
-----------------Classification Report-----------------
precision recall f1-score support
0 0.69 0.66 0.68 1911
1 0.61 0.48 0.54 2365
2 0.65 0.86 0.74 1770
accuracy 0.65 6046
macro avg 0.65 0.67 0.65 6046
weighted avg 0.65 0.65 0.64 6046
# The model struggles most with class 1 (No Lung Opacity / Not Normal), whose recall is only 0.48:
# many of these cases are misclassified, i.e. missed for that class (type II errors).
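One way to quantify which class drives the errors is per-class recall from the confusion matrix diagonal. The matrix below is hypothetical but constructed to match the supports (1911, 2365, 1770) and recalls in the report above:

```python
import numpy as np

# Hypothetical confusion matrix (rows = true class, cols = predicted class),
# consistent with recalls of ~0.66, ~0.48 and ~0.86 for classes 0, 1, 2
cm = np.array([[1260,  420,  231],
               [ 520, 1142,  703],
               [  50,  198, 1522]])

# Recall per class = correct predictions on the diagonal / all true examples in the row
recalls = cm.diagonal() / cm.sum(axis=1)
worst = int(recalls.argmin())
print(np.round(recalls, 2), "worst class:", worst)
# Class 1 (No Lung Opacity / Not Normal) is missed most often
```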
# classification report
from sklearn.metrics import recall_score, confusion_matrix, precision_score, f1_score, accuracy_score, roc_auc_score,classification_report
from sklearn.metrics import classification_report
reportData = classification_report(y_true_c, y_pred_classes,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF2[data+"_"+subData] = reportData[data][subData]
resultDF2['1_precision'] =round(resultDF2['1_precision']*100,2)
resultDF2['1_recall'] =round(resultDF2['1_recall']*100,2)
resultDF2['1_f1-score'] =round(resultDF2['1_f1-score']*100,2)
resultDF2
|   | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 3 classes | 70.06 | 64.8 | 60.74 | 48.29 | 53.8 | 2365 |
Model 1 (2 classes) clearly outperforms Model 2 (3 classes) on accuracy, so we chose Model 1's architecture for training on augmented images.
The training generator flows augmented batches into the model.
Data augmentation perturbs the training data, effectively enlarging the dataset and reducing overfitting.
Pixel values are normalised by dividing by 255; since that was already done above, the generator's rescale is left disabled.
We also apply horizontal flips, rotations of up to 40 degrees, shear, and zoom.
Importantly, the validation and test sets are never augmented; they only receive the same pixel rescaling.
train_gen = tf.keras.preprocessing.image.ImageDataGenerator(
# rescale=1/255,
horizontal_flip=True,
rotation_range=40,
shear_range=0.25,
zoom_range=0.2)
#train_gen = tf.keras.preprocessing.image.ImageDataGenerator(
# rotation_range = 40, # randomly rotate images in the range (degrees, 0 to 180)
# zoom_range = 0.2, # Randomly zoom image
# shear_range=0.25
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip = True, # randomly flip images
# vertical_flip=False
#)
valid_gen = tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_gen.flow(
X_train,
y_train,
batch_size=batch_size)
valid_generator = valid_gen.flow(
X_val,
y_val,
batch_size=batch_size)
# Plotting the augmented images gives a sense of how the images are manipulated.
X_plot, y_plot = next(train_generator)
plot_five_images(X_plot[:5])
# We are going to train the model-1 architecture on the augmented images.
%%time
random_state=22
# Basic model
model3= Sequential()
model3.add(Conv2D(32,(3,3),input_shape=(128,128,3),activation='relu'))
model3.add(MaxPooling2D(2,2))
model3.add(Conv2D(64,(3,3),activation='relu'))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(0.2))
model3.add(Conv2D(128,(3,3),activation='relu'))
model3.add(MaxPooling2D(2,2))
model3.add(Dropout(0.2))
model3.add(Conv2D(256,(3,3),activation='relu'))
model3.add(MaxPooling2D(2,2))
model3.add(Flatten())
model3.add(Dropout(0.2))
model3.add(Dense(256,activation='relu'))
model3.add(Dense(1,activation='sigmoid'))
# compiling the model
model3.compile(optimizer=Adam(learning_rate=0.001),loss='binary_crossentropy',metrics=['accuracy'])
# Fitting the model
history3 = model3.fit(x=train_generator,
epochs = epochs,
validation_data=valid_generator,
batch_size = batch_size,
verbose = 1)
Epoch 1/30
182/182 [==============================] - ETA: 0s - loss: 0.5641 - accuracy: 0.7159
182/182 [==============================] - 36s 189ms/step - loss: 0.5641 - accuracy: 0.7159 - val_loss: 0.4940 - val_accuracy: 0.7572
... (epochs 2-29 similar, val_accuracy climbing steadily to ~0.80) ...
Epoch 30/30
182/182 [==============================] - 32s 177ms/step - loss: 0.4397 - accuracy: 0.7972 - val_loss: 0.4299 - val_accuracy: 0.7984
CPU times: user 14min 54s, sys: 2min 25s, total: 17min 19s
Wall time: 16min 6s
# Evaluating the accuracy: we get ~80% on both train and test.
model3_test_loss, model3_test_acc = model3.evaluate(X_test, y_test, verbose=1)
print('Test Loss of the model is -:', model3_test_loss)
print('Test Accuracy of the model is:', model3_test_acc)
189/189 [==============================] - 3s 13ms/step - loss: 0.4289 - accuracy: 0.8028 Test Loss of the model is -: 0.4288730323314667 Test Accuracy of the model is: 0.8028448820114136
## Plotting the accuracy vs loss graph
train3_acc = history3.history['accuracy']
display_loss_accuracy(history3)
# printing the dataframe
resultDF3 = createResult("CNN model-1 with Augmentation"
,round(train3_acc[-1]*100,2),round(model3_test_acc*100,2))
resultDF3
| Method | Accuracy | Test Score | |
|---|---|---|---|
| 0 | CNN model-1 with Augmentation | 79.72 | 80.28 |
# Predict from the test set
threshold = 0.5
y_prob3 = model3.predict(x=X_test)
y_prob3 = y_prob3.reshape((len(y_prob3),))
# Threshold probabilities into binary predictions
y_hat3 = y_prob3 > threshold
y_hat3 = y_hat3.reshape(len(y_hat3),)
reportData = classification_report(y_test, y_hat3,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF3[data+"_"+subData] = reportData[data][subData]
resultDF3['1_precision'] =round(resultDF3['1_precision']*100,2)
resultDF3['1_recall'] =round(resultDF3['1_recall']*100,2)
resultDF3['1_f1-score'] =round(resultDF3['1_f1-score']*100,2)
resultDF3
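As an aside, the per-class numbers pulled out of `classification_report` above can also be obtained directly with sklearn's `precision_recall_fscore_support`. A minimal sketch on toy labels (in the notebook, `y_test` and `y_hat3` would be passed instead):

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy labels standing in for y_test / y_hat3
y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]

# Metrics for the positive class (label 1) only
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=[1])
print(round(prec[0] * 100, 2), round(rec[0] * 100, 2),
      round(f1[0] * 100, 2), support[0])  # 75.0 75.0 75.0 4
```

This avoids the dict-walking loop and rounds once at the end.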
7/189 [>.............................] - ETA: 4s
2022-08-26 12:57:24.500213: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
189/189 [==============================] - 2s 9ms/step
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN model-1 with Augmentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
evaluate_model(model3, X_test, y_test)
189/189 [==============================] - 2s 9ms/step ----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.70 0.65 0.68 1911
1 0.85 0.87 0.86 4135
accuracy 0.80 6046
macro avg 0.77 0.76 0.77 6046
weighted avg 0.80 0.80 0.80 6046
-----------------------ROC Curve-----------------------
df =pd.concat([resultDF1,resultDF2,resultDF3],ignore_index=True)
df
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
| 1 | CNN model with 3 classes | 70.06 | 64.80 | 60.74 | 48.29 | 53.80 | 2365 |
| 2 | CNN model-1 with Augmentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
## Creating copies of the data for later use
X_train1 = X_train.copy()
X_test1 = X_test.copy()
X_val1 = X_val.copy()
!pwd
/Users/seenu/tensorflow/rsna-pneumonia-detection-challenge
# Insights:
# The best model is Model-1 (with 2 classes) when compared with Model-2 (with 3 classes).
# So, we chose Model-1 for further analysis.
# The Model-1 with data augmentation performs fairly well with sample images.
# It's tremendously satisfying to see a simple model make substantial headway on learning a difficult task.
# Next steps would include an investigation of alternative model architectures, more robust hyperparameter optimization,
# and investigating transfer learning using models trained on similar images.
VGG16 is a convolutional neural network with 16 weight layers that we use for transfer learning: knowledge the network acquired while training on ImageNet is reused for a different but related problem.
VGGNet is a well-documented and widely used architecture for convolutional neural networks.
We set include_top=False to drop the classification head that was trained on the ImageNet dataset, and mark the base model as not trainable so its pretrained weights stay frozen.
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras import layers, models
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(128,128,3))
base_model.trainable = False ## Not trainable weights
## Adding a dense hidden layer with dropout and a sigmoid output layer
from tensorflow.keras import layers, models
max_layer = MaxPooling2D(pool_size=(2,2), strides=2)
flatten_layer = layers.Flatten()
dense_layer_1 = layers.Dense(512, activation='relu')
drop_layer_2 = layers.Dropout(0.5)
pred_layer = layers.Dense(1, activation='sigmoid')
CNN_VGG16_model = models.Sequential([
base_model,
max_layer,
flatten_layer,
dense_layer_1,
drop_layer_2,
pred_layer
])
CNN_VGG16_model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg16 (Functional) (None, 4, 4, 512) 14714688
max_pooling2d_12 (MaxPoolin (None, 2, 2, 512) 0
g2D)
flatten_3 (Flatten) (None, 2048) 0
dense_6 (Dense) (None, 512) 1049088
dropout_9 (Dropout) (None, 512) 0
dense_7 (Dense) (None, 1) 513
=================================================================
Total params: 15,764,289
Trainable params: 1,049,601
Non-trainable params: 14,714,688
_________________________________________________________________
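The trainable-parameter count in the summary can be checked by hand: only the new dense head is trainable, and its size follows from the flattened VGG16 feature map. A quick arithmetic sketch:

```python
# Flatten output: 2 * 2 * 512 features from the frozen VGG16 + max-pool
flat_features = 2 * 2 * 512                      # 2048

# Dense(512): weights + biases
dense_512 = flat_features * 512 + 512            # 1,049,088

# Dense(1) sigmoid output: weights + bias
dense_1 = 512 * 1 + 1                            # 513

print(dense_512 + dense_1)                       # 1049601 trainable params
```

This matches the "Trainable params: 1,049,601" line, while the 14,714,688 non-trainable parameters are the frozen VGG16 backbone.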
from tensorflow.keras.callbacks import EarlyStopping
models_loss = 'binary_crossentropy'
models_opt = 'adam'
checkpoint = ModelCheckpoint('Best_CNN_VGG16_model' + '.h5',
monitor='val_loss',
mode="min",
save_best_only=True,
verbose=1)
es = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=10,
verbose=1,
restore_best_weights=True)
lr_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=10,
verbose=1,
factor=0.8,
min_lr=0.0001,
mode="auto",
min_delta=0.0001,
cooldown=5)
callbacks = [checkpoint, es, lr_reduction]
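The ReduceLROnPlateau settings above mean that whenever val_loss stalls for 10 epochs the learning rate is multiplied by 0.8, never dropping below min_lr. A quick sketch of the resulting schedule, assuming Adam's default starting rate of 0.001:

```python
lr, factor, min_lr = 0.001, 0.8, 0.0001

# Simulate a few plateau-triggered reductions
schedule = []
for _ in range(5):
    lr = max(lr * factor, min_lr)
    schedule.append(round(lr, 6))
print(schedule)
```

With patience=10 on both EarlyStopping and the scheduler, training often stops before more than one reduction fires, as the logs below show.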
CNN_VGG16_model.compile(loss=models_loss,
optimizer=models_opt,
metrics=['accuracy'])
%%time
random_state=22
#Training the model
history4 = CNN_VGG16_model.fit(X_train,
y_train,
epochs=epochs,
validation_data=valid_generator,
batch_size = batch_size,
callbacks=callbacks)
Epoch 1/30
2022-08-26 13:02:09.231203: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - ETA: 0s - loss: 0.5026 - accuracy: 0.7636
2022-08-26 13:03:01.275267: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
Epoch 1: val_loss improved from inf to 0.45472, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 70s 380ms/step - loss: 0.5026 - accuracy: 0.7636 - val_loss: 0.4547 - val_accuracy: 0.7825 - lr: 0.0010 Epoch 2/30 182/182 [==============================] - ETA: 0s - loss: 0.4542 - accuracy: 0.7843 Epoch 2: val_loss improved from 0.45472 to 0.44443, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 69s 380ms/step - loss: 0.4542 - accuracy: 0.7843 - val_loss: 0.4444 - val_accuracy: 0.7901 - lr: 0.0010 Epoch 3/30 182/182 [==============================] - ETA: 0s - loss: 0.4432 - accuracy: 0.7882 Epoch 3: val_loss improved from 0.44443 to 0.44210, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 80s 440ms/step - loss: 0.4432 - accuracy: 0.7882 - val_loss: 0.4421 - val_accuracy: 0.7904 - lr: 0.0010 Epoch 4/30 182/182 [==============================] - ETA: 0s - loss: 0.4349 - accuracy: 0.7953 Epoch 4: val_loss improved from 0.44210 to 0.43481, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 69s 379ms/step - loss: 0.4349 - accuracy: 0.7953 - val_loss: 0.4348 - val_accuracy: 0.7937 - lr: 0.0010 Epoch 5/30 182/182 [==============================] - ETA: 0s - loss: 0.4309 - accuracy: 0.7945 Epoch 5: val_loss improved from 0.43481 to 0.42557, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 69s 380ms/step - loss: 0.4309 - accuracy: 0.7945 - val_loss: 0.4256 - val_accuracy: 0.8002 - lr: 0.0010 Epoch 6/30 182/182 [==============================] - ETA: 0s - loss: 0.4289 - accuracy: 0.7962 Epoch 6: val_loss improved from 0.42557 to 0.42472, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 69s 382ms/step - loss: 0.4289 - accuracy: 0.7962 - val_loss: 0.4247 - val_accuracy: 0.8019 - lr: 0.0010 Epoch 7/30 182/182 [==============================] - ETA: 0s - loss: 0.4240 - accuracy: 
0.8019 Epoch 7: val_loss improved from 0.42472 to 0.42457, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 68s 376ms/step - loss: 0.4240 - accuracy: 0.8019 - val_loss: 0.4246 - val_accuracy: 0.7995 - lr: 0.0010 Epoch 8/30 182/182 [==============================] - ETA: 0s - loss: 0.4219 - accuracy: 0.7980 Epoch 8: val_loss improved from 0.42457 to 0.42045, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 68s 376ms/step - loss: 0.4219 - accuracy: 0.7980 - val_loss: 0.4205 - val_accuracy: 0.8037 - lr: 0.0010 Epoch 9/30 182/182 [==============================] - ETA: 0s - loss: 0.4157 - accuracy: 0.8031 Epoch 9: val_loss did not improve from 0.42045 182/182 [==============================] - 68s 375ms/step - loss: 0.4157 - accuracy: 0.8031 - val_loss: 0.4240 - val_accuracy: 0.7990 - lr: 0.0010 Epoch 10/30 182/182 [==============================] - ETA: 0s - loss: 0.4134 - accuracy: 0.8070 Epoch 10: val_loss improved from 0.42045 to 0.41996, saving model to Best_CNN_VGG16_model.h5 182/182 [==============================] - 68s 376ms/step - loss: 0.4134 - accuracy: 0.8070 - val_loss: 0.4200 - val_accuracy: 0.7984 - lr: 0.0010 Epoch 11/30 182/182 [==============================] - ETA: 0s - loss: 0.4145 - accuracy: 0.8064 Epoch 11: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 375ms/step - loss: 0.4145 - accuracy: 0.8064 - val_loss: 0.4223 - val_accuracy: 0.8023 - lr: 0.0010 Epoch 12/30 182/182 [==============================] - ETA: 0s - loss: 0.4067 - accuracy: 0.8079 Epoch 12: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 376ms/step - loss: 0.4067 - accuracy: 0.8079 - val_loss: 0.4299 - val_accuracy: 0.7980 - lr: 0.0010 Epoch 13/30 182/182 [==============================] - ETA: 0s - loss: 0.4078 - accuracy: 0.8073 Epoch 13: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 375ms/step - 
loss: 0.4078 - accuracy: 0.8073 - val_loss: 0.4266 - val_accuracy: 0.7969 - lr: 0.0010 Epoch 14/30 182/182 [==============================] - ETA: 0s - loss: 0.4051 - accuracy: 0.8093 Epoch 14: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 376ms/step - loss: 0.4051 - accuracy: 0.8093 - val_loss: 0.4238 - val_accuracy: 0.7999 - lr: 0.0010 Epoch 15/30 182/182 [==============================] - ETA: 0s - loss: 0.3997 - accuracy: 0.8125 Epoch 15: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 376ms/step - loss: 0.3997 - accuracy: 0.8125 - val_loss: 0.4262 - val_accuracy: 0.7976 - lr: 0.0010 Epoch 16/30 182/182 [==============================] - ETA: 0s - loss: 0.4064 - accuracy: 0.8088 Epoch 16: val_loss did not improve from 0.41996 182/182 [==============================] - 220s 1s/step - loss: 0.4064 - accuracy: 0.8088 - val_loss: 0.4545 - val_accuracy: 0.7808 - lr: 0.0010 Epoch 17/30 182/182 [==============================] - ETA: 0s - loss: 0.3974 - accuracy: 0.8125 Epoch 17: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 377ms/step - loss: 0.3974 - accuracy: 0.8125 - val_loss: 0.4210 - val_accuracy: 0.8019 - lr: 0.0010 Epoch 18/30 182/182 [==============================] - ETA: 0s - loss: 0.3934 - accuracy: 0.8149 Epoch 18: val_loss did not improve from 0.41996 182/182 [==============================] - 68s 376ms/step - loss: 0.3934 - accuracy: 0.8149 - val_loss: 0.4215 - val_accuracy: 0.8023 - lr: 0.0010 Epoch 19/30 182/182 [==============================] - ETA: 0s - loss: 0.3886 - accuracy: 0.8180 Epoch 19: val_loss did not improve from 0.41996 182/182 [==============================] - 208s 1s/step - loss: 0.3886 - accuracy: 0.8180 - val_loss: 0.4218 - val_accuracy: 0.7999 - lr: 0.0010 Epoch 20/30 182/182 [==============================] - ETA: 0s - loss: 0.3834 - accuracy: 0.8197 Epoch 20: val_loss did not improve from 0.41996 Restoring model 
weights from the end of the best epoch: 10. Epoch 20: ReduceLROnPlateau reducing learning rate to 0.000800000037997961. 182/182 [==============================] - 69s 377ms/step - loss: 0.3834 - accuracy: 0.8197 - val_loss: 0.4277 - val_accuracy: 0.8023 - lr: 0.0010 Epoch 20: early stopping CPU times: user 1min 47s, sys: 12min 19s, total: 14min 7s Wall time: 28min 16s
# Evaluating the accuracy: the frozen-VGG16 model reaches ~80% on both train and test.
model4_test_loss, model4_test_acc = CNN_VGG16_model.evaluate(X_test, y_test, verbose=1)
print('Test Loss of the model is -:', model4_test_loss)
print('Test Accuracy of the model is:', model4_test_acc)
189/189 [==============================] - 18s 92ms/step - loss: 0.4227 - accuracy: 0.7985 Test Loss of the model is -: 0.42267507314682007 Test Accuracy of the model is: 0.7985444664955139
# Plotting the accuracy vs loss graph
train4_acc = history4.history['accuracy']
display_loss_accuracy(history4)
# printing the dataframe
resultDF4 = createResult("CNN_VGG16_model with trainable -False",
round(train4_acc[-1]*100),round(model4_test_acc*100,2))
resultDF4
| Method | Accuracy | Test Score | |
|---|---|---|---|
| 0 | CNN_VGG16_model with trainable -False | 82 | 79.85 |
# Predict from the test set
threshold = 0.5
y_prob4 = CNN_VGG16_model.predict(x=X_test)
y_prob4 = y_prob4.reshape((len(y_prob4),))
# Threshold probabilities into binary predictions
y_hat4 = y_prob4 > threshold
y_hat4 = y_hat4.reshape(len(y_hat4),)
reportData = classification_report(y_test, y_hat4,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF4[data+"_"+subData] = reportData[data][subData]
resultDF4['1_precision'] =round(resultDF4['1_precision']*100,2)
resultDF4['1_recall'] =round(resultDF4['1_recall']*100,2)
resultDF4['1_f1-score'] =round(resultDF4['1_f1-score']*100,2)
resultDF4
2/189 [..............................] - ETA: 10s
2022-08-26 13:36:26.280251: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
189/189 [==============================] - 17s 89ms/step
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN_VGG16_model with trainable -False | 82 | 79.85 | 82.3 | 89.87 | 85.92 | 4135 |
evaluate_model(CNN_VGG16_model, X_test, y_test)
189/189 [==============================] - 17s 89ms/step ----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.73 0.58 0.65 1911
1 0.82 0.90 0.86 4135
accuracy 0.80 6046
macro avg 0.77 0.74 0.75 6046
weighted avg 0.79 0.80 0.79 6046
-----------------------ROC Curve-----------------------
df =pd.concat([resultDF1,resultDF2,resultDF3,resultDF4],ignore_index=True)
df
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
| 1 | CNN model with 3 classes | 70.06 | 64.80 | 60.74 | 48.29 | 53.80 | 2365 |
| 2 | CNN model-1 with Augmentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
| 3 | CNN_VGG16_model with trainable -False | 82.00 | 79.85 | 82.30 | 89.87 | 85.92 | 4135 |
base_model1 = VGG16(weights="imagenet", include_top=False, input_shape=(128,128,3))
base_model1.trainable = True ## Trainable weights (fine-tune the full network)
from tensorflow.keras import layers, models
max_layer = MaxPooling2D(pool_size=(2,2), strides=2)
flatten_layer = layers.Flatten()
dense_layer_1 = layers.Dense(512, activation='relu')
drop_layer_2 = layers.Dropout(0.5)
pred_layer = layers.Dense(1, activation='sigmoid')
CNN_VGG16_model1 = models.Sequential([
base_model1,
max_layer,
flatten_layer,
dense_layer_1,
drop_layer_2,
pred_layer
])
CNN_VGG16_model1.compile(loss=models_loss,
optimizer=models_opt,
metrics=['accuracy'])
%%time
random_state=22
#Training the model
history5 = CNN_VGG16_model1.fit(X_train,
y_train,
epochs=epochs,
validation_data=valid_generator,
batch_size = batch_size,
callbacks=callbacks)
Epoch 1/30
2022-08-26 13:37:37.004316: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - ETA: 0s - loss: 0.6649 - accuracy: 0.6770
2022-08-26 13:40:13.466632: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
Epoch 1: val_loss did not improve from 0.41996 182/182 [==============================] - 172s 937ms/step - loss: 0.6649 - accuracy: 0.6770 - val_loss: 0.6249 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 2/30 182/182 [==============================] - ETA: 0s - loss: 0.6252 - accuracy: 0.6839 Epoch 2: val_loss did not improve from 0.41996 182/182 [==============================] - 181s 992ms/step - loss: 0.6252 - accuracy: 0.6839 - val_loss: 0.6239 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 3/30 182/182 [==============================] - ETA: 0s - loss: 0.6240 - accuracy: 0.6839 Epoch 3: val_loss did not improve from 0.41996 182/182 [==============================] - 171s 938ms/step - loss: 0.6240 - accuracy: 0.6839 - val_loss: 0.6239 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 4/30 182/182 [==============================] - ETA: 0s - loss: 0.6247 - accuracy: 0.6839 Epoch 4: val_loss did not improve from 0.41996 182/182 [==============================] - 170s 935ms/step - loss: 0.6247 - accuracy: 0.6839 - val_loss: 0.6241 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 5/30 182/182 [==============================] - ETA: 0s - loss: 0.6249 - accuracy: 0.6839 Epoch 5: val_loss did not improve from 0.41996 182/182 [==============================] - 171s 938ms/step - loss: 0.6249 - accuracy: 0.6839 - val_loss: 0.6239 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 6/30 182/182 [==============================] - ETA: 0s - loss: 0.6243 - accuracy: 0.6839 Epoch 6: val_loss did not improve from 0.41996 182/182 [==============================] - 170s 936ms/step - loss: 0.6243 - accuracy: 0.6839 - val_loss: 0.6240 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 7/30 182/182 [==============================] - ETA: 0s - loss: 0.6251 - accuracy: 0.6839 Epoch 7: val_loss did not improve from 0.41996 182/182 [==============================] - 170s 936ms/step - loss: 0.6251 - accuracy: 0.6839 - val_loss: 0.6240 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 8/30 182/182 [==============================] - ETA: 
0s - loss: 0.6245 - accuracy: 0.6839 Epoch 8: val_loss did not improve from 0.41996 182/182 [==============================] - 170s 936ms/step - loss: 0.6245 - accuracy: 0.6839 - val_loss: 0.6242 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 9/30 182/182 [==============================] - ETA: 0s - loss: 0.6245 - accuracy: 0.6839 Epoch 9: val_loss did not improve from 0.41996 182/182 [==============================] - 171s 938ms/step - loss: 0.6245 - accuracy: 0.6839 - val_loss: 0.6239 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 10/30 182/182 [==============================] - ETA: 0s - loss: 0.6242 - accuracy: 0.6839 Epoch 10: val_loss did not improve from 0.41996 182/182 [==============================] - 171s 939ms/step - loss: 0.6242 - accuracy: 0.6839 - val_loss: 0.6243 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 11/30 182/182 [==============================] - ETA: 0s - loss: 0.6249 - accuracy: 0.6839 Epoch 11: val_loss did not improve from 0.41996 182/182 [==============================] - 170s 937ms/step - loss: 0.6249 - accuracy: 0.6839 - val_loss: 0.6239 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 12/30 182/182 [==============================] - ETA: 0s - loss: 0.6246 - accuracy: 0.6839 Epoch 12: val_loss did not improve from 0.41996 Restoring model weights from the end of the best epoch: 2. Epoch 12: ReduceLROnPlateau reducing learning rate to 0.000800000037997961. 182/182 [==============================] - 171s 937ms/step - loss: 0.6246 - accuracy: 0.6839 - val_loss: 0.6246 - val_accuracy: 0.6839 - lr: 0.0010 Epoch 12: early stopping CPU times: user 1min 16s, sys: 10min 26s, total: 11min 42s Wall time: 34min 36s
#CNN_VGG16_model1.save('best_VGG16_model_trainable.h5')
#best_model = tf.keras.models.load_model('best_VGG16_model_trainable.h5')
model5_test_loss, model5_test_acc = CNN_VGG16_model1.evaluate(X_test, y_test, verbose=1)
print('Test Loss of the model is -:', model5_test_loss)
print('Test Accuracy of the model is:', model5_test_acc)
189/189 [==============================] - 15s 79ms/step - loss: 0.6239 - accuracy: 0.6839 Test Loss of the model is -: 0.6238886713981628 Test Accuracy of the model is: 0.6839232444763184
# Plotting the accuracy vs loss graph
train5_acc = history5.history['accuracy']
display_loss_accuracy(history5)
display_loss_accuracy(history5)
# printing the dataframe
resultDF5 = createResult("CNN_VGG16_model with trainable -True",
round(train5_acc[-1]*100,2),round(model5_test_acc*100,2))
resultDF5
| Method | Accuracy | Test Score | |
|---|---|---|---|
| 0 | CNN_VGG16_model with trainable -True | 68.39 | 68.39 |
# Predict from the test set
threshold = 0.5
y_prob5 = CNN_VGG16_model1.predict(x=X_test)
y_prob5 = y_prob5.reshape((len(y_prob5),))
# Threshold probabilities into binary predictions
y_hat5 = y_prob5 > threshold
y_hat5 = y_hat5.reshape(len(y_hat5),)
reportData = classification_report(y_test, y_hat5,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF5[data+"_"+subData] = reportData[data][subData]
resultDF5['1_precision'] =round(resultDF5['1_precision']*100,2)
resultDF5['1_recall'] =round(resultDF5['1_recall']*100,2)
resultDF5['1_f1-score'] =round(resultDF5['1_f1-score']*100,2)
resultDF5
1/189 [..............................] - ETA: 56s
2022-08-26 14:21:10.430084: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
189/189 [==============================] - 17s 90ms/step
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN_VGG16_model with trainable -True | 68.39 | 68.39 | 68.39 | 100.0 | 81.23 | 4135 |
evaluate_model(CNN_VGG16_model1, X_test, y_test)
189/189 [==============================] - 17s 89ms/step ----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.00 0.00 0.00 1911
1 0.68 1.00 0.81 4135
accuracy 0.68 6046
macro avg 0.34 0.50 0.41 6046
weighted avg 0.47 0.68 0.56 6046
-----------------------ROC Curve-----------------------
df =pd.concat([resultDF1,resultDF2,resultDF3,resultDF4,resultDF5],ignore_index=True)
df
| Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support | |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
| 1 | CNN model with 3 classes | 70.06 | 64.80 | 60.74 | 48.29 | 53.80 | 2365 |
| 2 | CNN model-1 with Augmentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
| 3 | CNN_VGG16_model with trainable -False | 82.00 | 79.85 | 82.30 | 89.87 | 85.92 | 4135 |
| 4 | CNN_VGG16_model with trainable -True | 68.39 | 68.39 | 68.39 | 100.00 | 81.23 | 4135 |
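The fully trainable VGG16 run has effectively collapsed to predicting the positive class for every image: recall is 100%, and accuracy, precision, and test score all equal the positive-class prevalence. A quick check against the class counts from the classification report (1911 negatives, 4135 positives) makes this explicit:

```python
# Class counts from the test-set classification report above
neg, pos = 1911, 4135

# Accuracy of a classifier that always predicts the positive class
prevalence = pos / (neg + pos)
print(round(prevalence * 100, 2))  # 68.39 -- matches the model's accuracy
```

Fine-tuning all 14.7M VGG16 weights at the default Adam learning rate likely destroyed the pretrained features early in training, which is why freezing the backbone (or fine-tuning with a much smaller learning rate) works better here.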
InceptionV3 belongs to the Inception family that began with GoogLeNet; its architecture is built from sub-networks called inception modules, which enable fast training, detection of complex patterns, and efficient use of parameters.
from keras.applications import InceptionV3
inception_base_model = InceptionV3(input_shape=(128,128,3),include_top=False,weights='imagenet')
inception_model = Sequential([
inception_base_model,
GlobalAveragePooling2D(),
Dense(512, activation="relu"),
BatchNormalization(),
Dropout(0.6),
Dense(128, activation="relu"),
BatchNormalization(),
Dropout(0.4),
Dense(64,activation="relu"),
BatchNormalization(),
Dropout(0.3),
Dense(1,activation="sigmoid")
])
%%time
random_state=22
#opt = tf.keras.optimizers.Adam(lr=0.001)
inception_model.compile(optimizer = RMSprop(learning_rate=0.0001),
loss = 'binary_crossentropy',metrics=['accuracy'])
history6 = inception_model.fit(train_generator,
batch_size = batch_size,
epochs = epochs,
validation_data = valid_generator,
verbose = 1)
Epoch 1/30
2022-08-26 14:32:50.602792: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - ETA: 0s - loss: 0.7606 - accuracy: 0.6233
2022-08-26 14:34:18.927969: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - 100s 470ms/step - loss: 0.7606 - accuracy: 0.6233 - val_loss: 0.8997 - val_accuracy: 0.6530 Epoch 2/30 182/182 [==============================] - 688s 4s/step - loss: 0.5992 - accuracy: 0.7251 - val_loss: 0.6641 - val_accuracy: 0.7244 Epoch 3/30 182/182 [==============================] - 79s 432ms/step - loss: 0.5146 - accuracy: 0.7695 - val_loss: 0.5743 - val_accuracy: 0.7890 Epoch 4/30 182/182 [==============================] - 154s 848ms/step - loss: 0.4736 - accuracy: 0.7882 - val_loss: 0.4506 - val_accuracy: 0.8167 Epoch 5/30 182/182 [==============================] - 79s 430ms/step - loss: 0.4411 - accuracy: 0.8015 - val_loss: 0.4425 - val_accuracy: 0.8126 Epoch 6/30 182/182 [==============================] - 421s 2s/step - loss: 0.4246 - accuracy: 0.8087 - val_loss: 0.4848 - val_accuracy: 0.8146 Epoch 7/30 182/182 [==============================] - 79s 430ms/step - loss: 0.4136 - accuracy: 0.8124 - val_loss: 0.4889 - val_accuracy: 0.8273 Epoch 8/30 182/182 [==============================] - 79s 431ms/step - loss: 0.4054 - accuracy: 0.8199 - val_loss: 0.4801 - val_accuracy: 0.8015 Epoch 9/30 182/182 [==============================] - 79s 432ms/step - loss: 0.3977 - accuracy: 0.8210 - val_loss: 0.4394 - val_accuracy: 0.8156 Epoch 10/30 182/182 [==============================] - 87s 474ms/step - loss: 0.3904 - accuracy: 0.8227 - val_loss: 0.4784 - val_accuracy: 0.8119 Epoch 11/30 182/182 [==============================] - 89s 484ms/step - loss: 0.3819 - accuracy: 0.8268 - val_loss: 0.4036 - val_accuracy: 0.8301 Epoch 12/30 182/182 [==============================] - 80s 440ms/step - loss: 0.3727 - accuracy: 0.8281 - val_loss: 0.5054 - val_accuracy: 0.8232 Epoch 13/30 182/182 [==============================] - 79s 432ms/step - loss: 0.3696 - accuracy: 0.8310 - val_loss: 0.4827 - val_accuracy: 0.8037 Epoch 14/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3603 - accuracy: 0.8374 - val_loss: 0.4979 
- val_accuracy: 0.7911 Epoch 15/30 182/182 [==============================] - 79s 434ms/step - loss: 0.3555 - accuracy: 0.8413 - val_loss: 0.4392 - val_accuracy: 0.8277 Epoch 16/30 182/182 [==============================] - 79s 433ms/step - loss: 0.3515 - accuracy: 0.8405 - val_loss: 0.4269 - val_accuracy: 0.8348 Epoch 17/30 182/182 [==============================] - 79s 432ms/step - loss: 0.3474 - accuracy: 0.8412 - val_loss: 0.4276 - val_accuracy: 0.8277 Epoch 18/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3380 - accuracy: 0.8474 - val_loss: 0.4392 - val_accuracy: 0.8313 Epoch 19/30 182/182 [==============================] - 79s 429ms/step - loss: 0.3347 - accuracy: 0.8500 - val_loss: 0.4298 - val_accuracy: 0.8227 Epoch 20/30 182/182 [==============================] - 79s 430ms/step - loss: 0.3216 - accuracy: 0.8598 - val_loss: 0.3985 - val_accuracy: 0.8250 Epoch 21/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3168 - accuracy: 0.8581 - val_loss: 0.4265 - val_accuracy: 0.8311 Epoch 22/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3128 - accuracy: 0.8633 - val_loss: 0.4714 - val_accuracy: 0.8296 Epoch 23/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3107 - accuracy: 0.8647 - val_loss: 0.4494 - val_accuracy: 0.8290 Epoch 24/30 182/182 [==============================] - 79s 431ms/step - loss: 0.3025 - accuracy: 0.8677 - val_loss: 0.4395 - val_accuracy: 0.8295 Epoch 25/30 182/182 [==============================] - 79s 431ms/step - loss: 0.2929 - accuracy: 0.8720 - val_loss: 0.4836 - val_accuracy: 0.8161 Epoch 26/30 182/182 [==============================] - 79s 431ms/step - loss: 0.2843 - accuracy: 0.8780 - val_loss: 0.5295 - val_accuracy: 0.8247 Epoch 27/30 182/182 [==============================] - 79s 434ms/step - loss: 0.2773 - accuracy: 0.8805 - val_loss: 0.5318 - val_accuracy: 0.8205 Epoch 28/30 182/182 [==============================] - 79s 430ms/step - loss: 
0.2721 - accuracy: 0.8850 - val_loss: 0.4881 - val_accuracy: 0.8353 Epoch 29/30 182/182 [==============================] - 79s 431ms/step - loss: 0.2652 - accuracy: 0.8888 - val_loss: 0.4432 - val_accuracy: 0.8282 Epoch 30/30 182/182 [==============================] - 79s 431ms/step - loss: 0.2631 - accuracy: 0.8892 - val_loss: 0.5472 - val_accuracy: 0.8215 CPU times: user 28min 49s, sys: 9min 39s, total: 38min 28s Wall time: 57min 17s
model6_test_loss, model6_test_acc = inception_model.evaluate(X_test, y_test, verbose=1)
print('Test Loss of the model is -:', model6_test_loss)
print('Test Accuracy of the model is:', model6_test_acc)
189/189 [==============================] - 9s 43ms/step - loss: 0.5557 - accuracy: 0.8204 Test Loss of the model is -: 0.5556652545928955 Test Accuracy of the model is: 0.8203771114349365
train6_acc = history6.history['accuracy']
# Plotting the accuracy vs loss graph
display_loss_accuracy(history6)
# printing the dataframe
resultDF6 = createResult("inception_base_model",
round(train6_acc[-1]*100,2),round(model6_test_acc*100,2))
resultDF6
| Method | Accuracy | Test Score | |
|---|---|---|---|
| 0 | inception_base_model | 88.92 | 82.04 |
evaluate_model(inception_model, X_test, y_test)
2022-08-26 15:30:19.867196: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
189/189 [==============================] - 10s 44ms/step ----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.83 0.54 0.66 1911
1 0.82 0.95 0.88 4135
accuracy 0.82 6046
macro avg 0.83 0.74 0.77 6046
weighted avg 0.82 0.82 0.81 6046
-----------------------ROC Curve-----------------------
# Predict from the test set
threshold = 0.5
y_prob6 = inception_model.predict(x=X_test)
y_prob6 = y_prob6.reshape((len(y_prob6),))
# Threshold probabilities into binary predictions
y_hat6 = y_prob6 > threshold
y_hat6 = y_hat6.reshape(len(y_hat6),)
reportData = classification_report(y_test, y_hat6,output_dict=True)
for data in reportData:
if(data == '-1' or data == '1'):
if(type(reportData[data]) is dict):
for subData in reportData[data]:
resultDF6[data+"_"+subData] = reportData[data][subData]
resultDF6['1_precision'] =round(resultDF6['1_precision']*100,2)
resultDF6['1_recall'] =round(resultDF6['1_recall']*100,2)
resultDF6['1_f1-score'] =round(resultDF6['1_f1-score']*100,2)
resultDF6
189/189 [==============================] - 7s 36ms/step
| | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | inception_base_model | 88.92 | 82.04 | 81.71 | 94.99 | 87.86 | 4135 |
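The flattening step relies on the shape of `classification_report(..., output_dict=True)`: each class label maps to a dict of precision/recall/f1-score/support. A self-contained example with toy labels:

```python
from sklearn.metrics import classification_report

# Toy labels, chosen only to show the dict structure.
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1]
report = classification_report(y_true, y_pred, output_dict=True)
# Per-class metrics live under the string class label, e.g. report["1"].
pos = report["1"]
# Flatten into "1_<metric>" columns as percentages, as the notebook does.
cols = {f"1_{k}": round(v * 100, 2) for k, v in pos.items() if k != "support"}
```

Besides the class keys, the dict also contains `"accuracy"`, `"macro avg"`, and `"weighted avg"` entries, which is why the notebook guards with a `type(...) is dict` check.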
df = pd.concat([resultDF1, resultDF2, resultDF3, resultDF4,
                resultDF5, resultDF6], ignore_index=True)
df
| | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
| 1 | CNN model with 3 classes | 70.06 | 64.80 | 60.74 | 48.29 | 53.80 | 2365 |
| 2 | CNN model-1 with Augmentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
| 3 | CNN_VGG16_model with trainable=False | 82.00 | 79.85 | 82.30 | 89.87 | 85.92 | 4135 |
| 4 | CNN_VGG16_model with trainable=True | 68.39 | 68.39 | 68.39 | 100.00 | 81.23 | 4135 |
| 5 | inception_base_model | 88.92 | 82.04 | 81.71 | 94.99 | 87.86 | 4135 |
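With all six results in one DataFrame, picking the strongest model by held-out test score is a one-liner. A sketch using two rows copied from the table above:

```python
import pandas as pd

# Two rows transcribed from the comparison table above.
df = pd.DataFrame({
    "Method": ["CNN model with 2 classes", "inception_base_model"],
    "Test Score": [81.34, 82.04],
})
# Rank by test score (generalization), not training accuracy.
best = df.sort_values("Test Score", ascending=False).iloc[0]
```

Note that by test accuracy alone the Inception transfer model leads, but the table's recall column matters more for screening: the fully trainable VGG16 run reaches 100% recall only by predicting pneumonia for nearly everything.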
from keras.applications.densenet import DenseNet121
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras import backend as K
base_model = DenseNet121(input_shape=(128, 128, 3), include_top=False, weights='imagenet', pooling='avg')
base_model.summary()
Model: "densenet121"
__________________________________________________________________________________________________
 Layer (type)                     Output Shape           Param #    Connected to
==================================================================================================
 input_4 (InputLayer)             [(None, 128, 128, 3)]  0          []
 zero_padding2d (ZeroPadding2D)   (None, 134, 134, 3)    0          ['input_4[0][0]']
 conv1/conv (Conv2D)              (None, 64, 64, 64)     9408       ['zero_padding2d[0][0]']
 conv1/bn (BatchNormalization)    (None, 64, 64, 64)     256        ['conv1/conv[0][0]']
 conv1/relu (Activation)          (None, 64, 64, 64)     0          ['conv1/bn[0][0]']
 zero_padding2d_1 (ZeroPadding2D) (None, 66, 66, 64)     0          ['conv1/relu[0][0]']
 pool1 (MaxPooling2D)             (None, 32, 32, 64)     0          ['zero_padding2d_1[0][0]']
 ...
 [summary truncated for readability: DenseNet121 stacks four dense blocks of
 6, 12, 24, and 16 BN-ReLU-Conv layers (growth rate 32), each layer
 concatenating its 32 output channels onto the block's running feature map.
 Transition layers (pool2-pool4) between blocks halve both the spatial
 resolution and the channel count, so feature maps shrink
 32x32 -> 16x16 -> 8x8 -> 4x4 while the depth grows toward 1024.]
 ...
 conv5_block8_0_bn (BatchNormalization) (None, 4, 4, 736) 2944      ['conv5_block7_concat[0][0]']
ization)
conv5_block8_0_relu (Activatio (None, 4, 4, 736) 0 ['conv5_block8_0_bn[0][0]']
n)
conv5_block8_1_conv (Conv2D) (None, 4, 4, 128) 94208 ['conv5_block8_0_relu[0][0]']
conv5_block8_1_bn (BatchNormal (None, 4, 4, 128) 512 ['conv5_block8_1_conv[0][0]']
ization)
conv5_block8_1_relu (Activatio (None, 4, 4, 128) 0 ['conv5_block8_1_bn[0][0]']
n)
conv5_block8_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block8_1_relu[0][0]']
conv5_block8_concat (Concatena (None, 4, 4, 768) 0 ['conv5_block7_concat[0][0]',
te) 'conv5_block8_2_conv[0][0]']
conv5_block9_0_bn (BatchNormal (None, 4, 4, 768) 3072 ['conv5_block8_concat[0][0]']
ization)
conv5_block9_0_relu (Activatio (None, 4, 4, 768) 0 ['conv5_block9_0_bn[0][0]']
n)
conv5_block9_1_conv (Conv2D) (None, 4, 4, 128) 98304 ['conv5_block9_0_relu[0][0]']
conv5_block9_1_bn (BatchNormal (None, 4, 4, 128) 512 ['conv5_block9_1_conv[0][0]']
ization)
conv5_block9_1_relu (Activatio (None, 4, 4, 128) 0 ['conv5_block9_1_bn[0][0]']
n)
conv5_block9_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block9_1_relu[0][0]']
conv5_block9_concat (Concatena (None, 4, 4, 800) 0 ['conv5_block8_concat[0][0]',
te) 'conv5_block9_2_conv[0][0]']
conv5_block10_0_bn (BatchNorma (None, 4, 4, 800) 3200 ['conv5_block9_concat[0][0]']
lization)
conv5_block10_0_relu (Activati (None, 4, 4, 800) 0 ['conv5_block10_0_bn[0][0]']
on)
conv5_block10_1_conv (Conv2D) (None, 4, 4, 128) 102400 ['conv5_block10_0_relu[0][0]']
conv5_block10_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block10_1_conv[0][0]']
lization)
conv5_block10_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block10_1_bn[0][0]']
on)
conv5_block10_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block10_1_relu[0][0]']
conv5_block10_concat (Concaten (None, 4, 4, 832) 0 ['conv5_block9_concat[0][0]',
ate) 'conv5_block10_2_conv[0][0]']
conv5_block11_0_bn (BatchNorma (None, 4, 4, 832) 3328 ['conv5_block10_concat[0][0]']
lization)
conv5_block11_0_relu (Activati (None, 4, 4, 832) 0 ['conv5_block11_0_bn[0][0]']
on)
conv5_block11_1_conv (Conv2D) (None, 4, 4, 128) 106496 ['conv5_block11_0_relu[0][0]']
conv5_block11_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block11_1_conv[0][0]']
lization)
conv5_block11_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block11_1_bn[0][0]']
on)
conv5_block11_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block11_1_relu[0][0]']
conv5_block11_concat (Concaten (None, 4, 4, 864) 0 ['conv5_block10_concat[0][0]',
ate) 'conv5_block11_2_conv[0][0]']
conv5_block12_0_bn (BatchNorma (None, 4, 4, 864) 3456 ['conv5_block11_concat[0][0]']
lization)
conv5_block12_0_relu (Activati (None, 4, 4, 864) 0 ['conv5_block12_0_bn[0][0]']
on)
conv5_block12_1_conv (Conv2D) (None, 4, 4, 128) 110592 ['conv5_block12_0_relu[0][0]']
conv5_block12_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block12_1_conv[0][0]']
lization)
conv5_block12_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block12_1_bn[0][0]']
on)
conv5_block12_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block12_1_relu[0][0]']
conv5_block12_concat (Concaten (None, 4, 4, 896) 0 ['conv5_block11_concat[0][0]',
ate) 'conv5_block12_2_conv[0][0]']
conv5_block13_0_bn (BatchNorma (None, 4, 4, 896) 3584 ['conv5_block12_concat[0][0]']
lization)
conv5_block13_0_relu (Activati (None, 4, 4, 896) 0 ['conv5_block13_0_bn[0][0]']
on)
conv5_block13_1_conv (Conv2D) (None, 4, 4, 128) 114688 ['conv5_block13_0_relu[0][0]']
conv5_block13_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block13_1_conv[0][0]']
lization)
conv5_block13_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block13_1_bn[0][0]']
on)
conv5_block13_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block13_1_relu[0][0]']
conv5_block13_concat (Concaten (None, 4, 4, 928) 0 ['conv5_block12_concat[0][0]',
ate) 'conv5_block13_2_conv[0][0]']
conv5_block14_0_bn (BatchNorma (None, 4, 4, 928) 3712 ['conv5_block13_concat[0][0]']
lization)
conv5_block14_0_relu (Activati (None, 4, 4, 928) 0 ['conv5_block14_0_bn[0][0]']
on)
conv5_block14_1_conv (Conv2D) (None, 4, 4, 128) 118784 ['conv5_block14_0_relu[0][0]']
conv5_block14_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block14_1_conv[0][0]']
lization)
conv5_block14_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block14_1_bn[0][0]']
on)
conv5_block14_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block14_1_relu[0][0]']
conv5_block14_concat (Concaten (None, 4, 4, 960) 0 ['conv5_block13_concat[0][0]',
ate) 'conv5_block14_2_conv[0][0]']
conv5_block15_0_bn (BatchNorma (None, 4, 4, 960) 3840 ['conv5_block14_concat[0][0]']
lization)
conv5_block15_0_relu (Activati (None, 4, 4, 960) 0 ['conv5_block15_0_bn[0][0]']
on)
conv5_block15_1_conv (Conv2D) (None, 4, 4, 128) 122880 ['conv5_block15_0_relu[0][0]']
conv5_block15_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block15_1_conv[0][0]']
lization)
conv5_block15_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block15_1_bn[0][0]']
on)
conv5_block15_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block15_1_relu[0][0]']
conv5_block15_concat (Concaten (None, 4, 4, 992) 0 ['conv5_block14_concat[0][0]',
ate) 'conv5_block15_2_conv[0][0]']
conv5_block16_0_bn (BatchNorma (None, 4, 4, 992) 3968 ['conv5_block15_concat[0][0]']
lization)
conv5_block16_0_relu (Activati (None, 4, 4, 992) 0 ['conv5_block16_0_bn[0][0]']
on)
conv5_block16_1_conv (Conv2D) (None, 4, 4, 128) 126976 ['conv5_block16_0_relu[0][0]']
conv5_block16_1_bn (BatchNorma (None, 4, 4, 128) 512 ['conv5_block16_1_conv[0][0]']
lization)
conv5_block16_1_relu (Activati (None, 4, 4, 128) 0 ['conv5_block16_1_bn[0][0]']
on)
conv5_block16_2_conv (Conv2D) (None, 4, 4, 32) 36864 ['conv5_block16_1_relu[0][0]']
conv5_block16_concat (Concaten (None, 4, 4, 1024) 0 ['conv5_block15_concat[0][0]',
ate) 'conv5_block16_2_conv[0][0]']
bn (BatchNormalization) (None, 4, 4, 1024) 4096 ['conv5_block16_concat[0][0]']
relu (Activation) (None, 4, 4, 1024) 0 ['bn[0][0]']
avg_pool (GlobalAveragePooling (None, 1024) 0 ['relu[0][0]']
2D)
==================================================================================================
Total params: 7,037,504
Trainable params: 6,953,856
Non-trainable params: 83,648
__________________________________________________________________________________________________
layers = base_model.layers
print(f"The model has {len(layers)} layers")
The model has 428 layers
print(f"The input shape {base_model.input}")
print(f"The output shape {base_model.output}")
The input shape KerasTensor(type_spec=TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32, name='input_4'), name='input_4', description="created by layer 'input_4'")
The output shape KerasTensor(type_spec=TensorSpec(shape=(None, 1024), dtype=tf.float32, name=None), name='avg_pool/Mean:0', description="created by layer 'avg_pool'")
# Build a DenseNet121-based classifier with the Keras functional API:
# ImageNet weights, global average pooling, and a single sigmoid output unit
base_model = DenseNet121(include_top=False, weights='imagenet')
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(1, activation='sigmoid')(x)
model7 = Model(inputs=base_model.input, outputs=predictions)

model7.compile(loss='binary_crossentropy',
               optimizer='adam',
               metrics=['accuracy'])

history7 = model7.fit(X_train,
                      y_train,
                      epochs=epochs,
                      batch_size=batch_size,
                      validation_data=valid_generator)
Epoch 1/30
2022-08-26 15:31:38.934348: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
182/182 [==============================] - 148s 748ms/step - loss: 0.4382 - accuracy: 0.7992 - val_loss: 0.4678 - val_accuracy: 0.7961
Epoch 2/30
182/182 [==============================] - 127s 693ms/step - loss: 0.3910 - accuracy: 0.8216 - val_loss: 0.4785 - val_accuracy: 0.8103
Epoch 3/30
182/182 [==============================] - 126s 691ms/step - loss: 0.3760 - accuracy: 0.8282 - val_loss: 0.4846 - val_accuracy: 0.8078
Epoch 4/30
182/182 [==============================] - 125s 687ms/step - loss: 0.3640 - accuracy: 0.8347 - val_loss: 0.4362 - val_accuracy: 0.7939
Epoch 5/30
182/182 [==============================] - 126s 690ms/step - loss: 0.3506 - accuracy: 0.8387 - val_loss: 2.4647 - val_accuracy: 0.5275
Epoch 6/30
182/182 [==============================] - 126s 691ms/step - loss: 0.3277 - accuracy: 0.8550 - val_loss: 0.5198 - val_accuracy: 0.7413
Epoch 7/30
182/182 [==============================] - 126s 691ms/step - loss: 0.3079 - accuracy: 0.8634 - val_loss: 0.4728 - val_accuracy: 0.7757
Epoch 8/30
182/182 [==============================] - 261s 1s/step - loss: 0.2928 - accuracy: 0.8757 - val_loss: 1.4769 - val_accuracy: 0.6970
Epoch 9/30
182/182 [==============================] - 127s 698ms/step - loss: 0.2747 - accuracy: 0.8818 - val_loss: 0.4963 - val_accuracy: 0.7916
Epoch 10/30
182/182 [==============================] - 126s 691ms/step - loss: 0.2312 - accuracy: 0.9026 - val_loss: 0.7710 - val_accuracy: 0.7359
Epoch 11/30
182/182 [==============================] - 125s 688ms/step - loss: 0.2038 - accuracy: 0.9156 - val_loss: 0.9367 - val_accuracy: 0.6930
Epoch 12/30
182/182 [==============================] - 126s 690ms/step - loss: 0.1647 - accuracy: 0.9323 - val_loss: 0.4552 - val_accuracy: 0.8189
Epoch 13/30
182/182 [==============================] - 159s 871ms/step - loss: 0.1380 - accuracy: 0.9456 - val_loss: 0.8087 - val_accuracy: 0.7761
Epoch 14/30
182/182 [==============================] - 126s 691ms/step - loss: 0.1143 - accuracy: 0.9550 - val_loss: 0.4909 - val_accuracy: 0.8209
Epoch 15/30
182/182 [==============================] - 332s 2s/step - loss: 0.0963 - accuracy: 0.9631 - val_loss: 1.0531 - val_accuracy: 0.6376
Epoch 16/30
182/182 [==============================] - 125s 684ms/step - loss: 0.0849 - accuracy: 0.9672 - val_loss: 0.6720 - val_accuracy: 0.8220
Epoch 17/30
182/182 [==============================] - 461s 3s/step - loss: 0.0678 - accuracy: 0.9750 - val_loss: 0.8138 - val_accuracy: 0.8057
Epoch 18/30
182/182 [==============================] - 126s 690ms/step - loss: 0.0626 - accuracy: 0.9754 - val_loss: 0.8996 - val_accuracy: 0.8042
Epoch 19/30
182/182 [==============================] - 126s 688ms/step - loss: 0.0677 - accuracy: 0.9754 - val_loss: 0.6584 - val_accuracy: 0.8258
Epoch 20/30
182/182 [==============================] - 130s 716ms/step - loss: 0.0460 - accuracy: 0.9835 - val_loss: 0.7335 - val_accuracy: 0.8255
Epoch 21/30
182/182 [==============================] - 126s 690ms/step - loss: 0.0489 - accuracy: 0.9814 - val_loss: 0.9340 - val_accuracy: 0.8045
Epoch 22/30
182/182 [==============================] - 126s 693ms/step - loss: 0.0441 - accuracy: 0.9837 - val_loss: 0.7091 - val_accuracy: 0.8144
Epoch 23/30
182/182 [==============================] - 126s 690ms/step - loss: 0.0439 - accuracy: 0.9838 - val_loss: 0.8634 - val_accuracy: 0.8205
Epoch 24/30
182/182 [==============================] - 126s 689ms/step - loss: 0.0403 - accuracy: 0.9858 - val_loss: 1.3130 - val_accuracy: 0.7598
Epoch 25/30
182/182 [==============================] - 293s 2s/step - loss: 0.0328 - accuracy: 0.9883 - val_loss: 0.8829 - val_accuracy: 0.8184
Epoch 26/30
182/182 [==============================] - 126s 692ms/step - loss: 0.0368 - accuracy: 0.9863 - val_loss: 0.9353 - val_accuracy: 0.8387
Epoch 27/30
182/182 [==============================] - 125s 685ms/step - loss: 0.0264 - accuracy: 0.9902 - val_loss: 0.8630 - val_accuracy: 0.8210
Epoch 28/30
182/182 [==============================] - 126s 690ms/step - loss: 0.0327 - accuracy: 0.9876 - val_loss: 0.8096 - val_accuracy: 0.8358
Epoch 29/30
182/182 [==============================] - 127s 695ms/step - loss: 0.0364 - accuracy: 0.9868 - val_loss: 0.8021 - val_accuracy: 0.8159
Epoch 30/30
182/182 [==============================] - 126s 692ms/step - loss: 0.0320 - accuracy: 0.9881 - val_loss: 0.9067 - val_accuracy: 0.8141
model7_test_loss, model7_test_acc = model7.evaluate(X_test, y_test, verbose=1)
print('Test loss of the model is:', model7_test_loss)
print('Test accuracy of the model is:', model7_test_acc)
189/189 [==============================] - 20s 89ms/step - loss: 0.9013 - accuracy: 0.8179
Test loss of the model is: 0.9012991189956665
Test accuracy of the model is: 0.8178961277008057
display_loss_accuracy(history7)
# Collect the final train accuracy and test accuracy into the results dataframe
train7_acc = history7.history['accuracy']
resultDF7 = createResult("DenseNet_model",
                         round(train7_acc[-1] * 100, 2),
                         round(model7_test_acc * 100, 2))
resultDF7
| | Method | Accuracy | Test Score |
|---|---|---|---|
| 0 | DenseNet_model | 98.81 | 81.79 |
evaluate_model(inception_model, X_test, y_test)
189/189 [==============================] - 7s 36ms/step
----------------------Confusion Matrix---------------------
-----------------Classification Report-----------------
precision recall f1-score support
0 0.83 0.54 0.66 1911
1 0.82 0.95 0.88 4135
accuracy 0.82 6046
macro avg 0.83 0.74 0.77 6046
weighted avg 0.82 0.82 0.81 6046
-----------------------ROC Curve-----------------------
# Predict probabilities on the test set and threshold them into class labels
threshold = 0.5
y_prob7 = model7.predict(x=X_test)
y_prob7 = y_prob7.reshape((len(y_prob7),))
# Convert probabilities to binary predictions
y_hat7 = y_prob7 > threshold
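The thresholding step above is simple but easy to get wrong (the comparison produces booleans, not 0/1 integers). A minimal sketch with illustrative probabilities, not real model output:

```python
import numpy as np

# Example sigmoid outputs (illustrative values only)
probs = np.array([0.12, 0.55, 0.91, 0.49])

# Compare against the cutoff, then cast the boolean mask to integer labels
labels = (probs > 0.5).astype(int)
print(labels)  # values strictly above 0.5 become class 1
```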
reportData = classification_report(y_test, y_hat7, output_dict=True)
for data in reportData:
    if data == '1':  # keep only the pneumonia-class (class 1) metrics
        if type(reportData[data]) is dict:
            for subData in reportData[data]:
                resultDF7[data + "_" + subData] = reportData[data][subData]
resultDF7['1_precision'] = round(resultDF7['1_precision'] * 100, 2)
resultDF7['1_recall'] = round(resultDF7['1_recall'] * 100, 2)
resultDF7['1_f1-score'] = round(resultDF7['1_f1-score'] * 100, 2)
resultDF7
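The loop above flattens one class's nested metrics from the report dict into prefixed column names. The same idea in isolation, using a hand-built dict with the same shape sklearn's `classification_report(..., output_dict=True)` returns (the numbers here are illustrative):

```python
# Hypothetical report dict mimicking sklearn's output_dict=True shape
report = {
    "0": {"precision": 0.83, "recall": 0.54, "f1-score": 0.66, "support": 1911},
    "1": {"precision": 0.81, "recall": 0.95, "f1-score": 0.88, "support": 4135},
    "accuracy": 0.82,
}

def flatten_class_metrics(report, cls="1"):
    """Flatten one class's metrics into {'1_precision': ..., ...} keys."""
    return {f"{cls}_{k}": v for k, v in report[cls].items()}

cols = flatten_class_metrics(report, "1")
print(cols)
```

Each key/value pair can then be assigned directly as a dataframe column, as the notebook does with `resultDF7`.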
2022-08-26 16:52:00.791544: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
189/189 [==============================] - 17s 80ms/step
| | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | DenseNet_model | 98.81 | 81.79 | 81.38 | 95.14 | 87.72 | 4135 |
df = pd.concat([resultDF1, resultDF2, resultDF3, resultDF4, resultDF5, resultDF6, resultDF7], ignore_index=True)
df
| | Method | Accuracy | Test Score | 1_precision | 1_recall | 1_f1-score | 1_support |
|---|---|---|---|---|---|---|---|
| 0 | CNN model with 2 classes | 83.45 | 81.34 | 84.83 | 88.56 | 86.65 | 4135 |
| 1 | CNN model with 3 classes | 70.06 | 64.80 | 60.74 | 48.29 | 53.80 | 2365 |
| 2 | CNN model-1 with Augumentation | 79.72 | 80.28 | 84.52 | 87.13 | 85.81 | 4135 |
| 3 | CNN_VGG16_model with trainable -False | 82.00 | 79.85 | 82.30 | 89.87 | 85.92 | 4135 |
| 4 | CNN_VGG16_model with trainable -True | 68.39 | 68.39 | 68.39 | 100.00 | 81.23 | 4135 |
| 5 | inception_base_model | 88.92 | 82.04 | 81.71 | 94.99 | 87.86 | 4135 |
| 6 | DenseNet_model | 98.81 | 81.79 | 81.38 | 95.14 | 87.72 | 4135 |
test_df = pd.read_csv('stage_2_sample_submission.csv')
test_df
| | patientId | PredictionString |
|---|---|---|
| 0 | 0000a175-0e68-4ca4-b1af-167204a7e0bc | 0.5 0 0 100 100 |
| 1 | 0005d3cc-3c3f-40b9-93c3-46231c3eb813 | 0.5 0 0 100 100 |
| 2 | 000686d7-f4fc-448d-97a0-44fa9c5d3aa6 | 0.5 0 0 100 100 |
| 3 | 000e3a7d-c0ca-4349-bb26-5af2d8993c3d | 0.5 0 0 100 100 |
| 4 | 00100a24-854d-423d-a092-edcf6179e061 | 0.5 0 0 100 100 |
| ... | ... | ... |
| 2995 | c1e88810-9e4e-4f39-9306-8d314bfc1ff1 | 0.5 0 0 100 100 |
| 2996 | c1ec035b-377b-416c-a281-f868b7c9b6c3 | 0.5 0 0 100 100 |
| 2997 | c1ef5b66-0fd7-49d1-ae6b-5af84929414b | 0.5 0 0 100 100 |
| 2998 | c1ef6724-f95f-40f1-b25b-de806d9bc39d | 0.5 0 0 100 100 |
| 2999 | c1f55e7e-4065-4dc0-993e-a7c1704c6036 | 0.5 0 0 100 100 |
3000 rows × 2 columns
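The sample submission's `PredictionString` packs a confidence followed by a bounding box (`x y width height`), with multiple detections concatenated. A small parser sketch for that format (field order per the competition's submission format):

```python
def parse_prediction_string(s):
    """Parse 'conf x y w h [conf x y w h ...]' into (confidence, box) tuples."""
    vals = s.split()
    boxes = []
    for i in range(0, len(vals), 5):
        conf = float(vals[i])
        x, y, w, h = (float(v) for v in vals[i + 1:i + 5])
        boxes.append((conf, (x, y, w, h)))
    return boxes

print(parse_prediction_string("0.5 0 0 100 100"))
```

For the placeholder row `0.5 0 0 100 100`, this yields a single box at the origin, 100 pixels square, with confidence 0.5.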
ADJUSTED_IMAGE_SIZE = 128
imageList = []

# Resize an image array to ADJUSTED_IMAGE_SIZE x ADJUSTED_IMAGE_SIZE
def readAndReshapeImage(image):
    img = np.array(image).astype(np.uint8)
    res = cv2.resize(img, (ADJUSTED_IMAGE_SIZE, ADJUSTED_IMAGE_SIZE), interpolation=cv2.INTER_LINEAR)
    return res

# Read each test DICOM file, convert it to 3 channels, and resize it
def populateImage(rowData):
    for index, row in rowData.iterrows():
        patientId = row.patientId
        dcm_file = 'stage_2_test_images/{}.dcm'.format(patientId)
        dcm_data = dcm.read_file(dcm_file)
        img = dcm_data.pixel_array
        # DICOM pixel arrays are single-channel; stack to 3 channels for the CNN input
        if len(img.shape) != 3 or img.shape[2] != 3:
            img = np.stack((img,) * 3, -1)
        imageList.append(readAndReshapeImage(img))
    return np.array(imageList)
test_images = populateImage(test_df)
test_images.shape
(3000, 128, 128, 3)
## Checking one of the converted images
plt.imshow(test_images[1200])
<matplotlib.image.AxesImage at 0x81d48aca0>
test_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
# shuffle=False keeps predictions aligned with the order of test_images
# (flow() shuffles by default, which would scramble the patientId mapping)
test_generator = test_gen.flow(test_images, shuffle=False)
predictions = (model1.predict(test_generator) > 0.5).astype("int32")
predictions
predictions
2022-08-26 17:00:27.989684: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
94/94 [==============================] - 1s 11ms/step
array([[1],
[0],
[1],
...,
[1],
[1],
[0]], dtype=int32)
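To turn these binary predictions into submission rows, each positive image needs a `PredictionString` and each negative one an empty string. A hedged sketch (the helper name, the fixed confidence, and the full-image placeholder box are illustrative, not part of the notebook):

```python
# Hypothetical helper: map per-image binary predictions to submission rows.
# Positives get a placeholder box with a fixed confidence; negatives stay empty.
def to_submission_rows(patient_ids, preds, conf=0.5, box="0 0 1024 1024"):
    rows = []
    for pid, p in zip(patient_ids, preds):
        rows.append({"patientId": pid,
                     "PredictionString": f"{conf} {box}" if p == 1 else ""})
    return rows

rows = to_submission_rows(["id_a", "id_b"], [1, 0])
print(rows)
```

A real submission would replace the placeholder box with localized opacity coordinates from a detection model; the classifier here only says whether pneumonia is present.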
inception_model.save('best_inception_model_final_26_aug.h5')
# Insights of transfer learning and final conclusion:
# Between the CNN model with 2 classes (merging "no lung opacity" and "normal")
# and the CNN model with 3 classes using stratified sampling (to preserve the
# class ratio during modelling), the 2-class model achieved higher accuracy and
# higher recall for the pneumonia class (class 1), so 2 classes were used for
# all further modelling.
# The 2-class CNN trained on augmented images showed a slight increase in
# recall with essentially unchanged test accuracy, despite a small drop in
# train accuracy.
# Among the VGG16 transfer-learning variants, trainable=False gave noticeably
# higher accuracy than trainable=True.
# InceptionNet (GoogLeNet) and DenseNet were tried next. DenseNet gave the
# highest training accuracy of all models, but the large gap between its train
# and test accuracy indicates overfitting; its recall is high and similar to
# InceptionNet's.
# Overall, InceptionNet is the best of the transfer-learning models tried: it
# delivers the highest recall, which is the main goal of this project, since
# in the healthcare domain the focus is on minimizing the false-negative rate.
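Since the conclusion rests on recall and the false-negative rate, it may help to make the relationship explicit: recall = TP / (TP + FN), and the false-negative rate is its complement. A minimal sketch with illustrative labels:

```python
# Compute recall and false-negative rate from binary labels (illustrative data).
def recall_and_fnr(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    recall = tp / (tp + fn)
    return recall, 1 - recall  # FNR = 1 - recall

print(recall_and_fnr([1, 1, 1, 1], [1, 1, 1, 0]))  # → (0.75, 0.25)
```

Maximizing recall for the pneumonia class directly minimizes the FNR, which is why recall is the headline metric for this screening task.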